Jared Alessandroni is the CTO of cultural intelligence agency Sparks & Honey, and he talks about the development of a SaaS platform to make the huge amount of cultural data and insights they collect more accessible. More importantly, he foresees a time when CMOs, and even CEOs, will have an AI to process, sort, synthesise, and provide a point of view to inform their decision-making.
You can listen to the podcast here:
Follow Managing Marketing on Soundcloud, TuneIn, Stitcher, Spotify and Apple Podcasts.
Transcription:
Darren:
Welcome to Managing Marketing. Today I’m in New York City, and I’ve returned to Sparks and Honey on Madison and have the opportunity to sit down with Jared Alessandroni, who is the CTO, Chief Technical Officer, at Sparks and Honey. Welcome, Jared.
Jared:
Thank you so much. It’s my pleasure to be here and a pleasure to be on Madison Avenue—such a fun way to have this kind of podcast; it’s the place to be.
Darren:
A very traditional place for advertising. I guess what we were going to talk about today is, if not the future of advertising, then where advertising, communications, and marketing is today. But before we get there, I should have addressed you as Doctor because you have a PhD in…
Jared:
Cognitive Neuroscience. I went to Columbia. But first, for my undergrad, I went to a school in New Hampshire; I wanted to be a teacher. I was like, I’m going to change the world and start a school (and I did, by the way), and I was really excited about it.
Then I got into academic neuroscience. Academia was something I didn’t know enough about. A lot of my friends in this space had parents or cousins or uncles who were academics. I’m in the unique situation of having almost nobody in my family who was an academic so I didn’t realise what it meant to go into that universe.
And once I got into it I did not like it at all.
Darren:
It’s a very different world isn’t it?
Jared:
It is; the motivations are different, and so is the way of thinking. First of all, the obsession with publishing changes the way that you think about thought. If you think about why you care about something, your motivation can either be that you care about it because you love the output, the thing it’s going to become, or that you care about it because you think you should care about it.
And academia, in many ways, is about people justifying the thing they’re excited about rather than saying, ‘I’m excited about this new thing now, goodbye this’.
Darren:
That’s right because the longer you’re in academia the more you end up in a channel of a particular specialty don’t you? And that’s one of the things when I moved from medical research into advertising and people said ‘why?’. It was because I was heading down a career which would be a mile deep but only an inch wide.
I’ve ended up in an area where it’s a mile wide and an inch deep, which has infinitely more possibility.
Jared:
I think it’s about passion. I was into prefrontal cortical development and the way we develop our abilities, particularly around language and speech. I was interested in speech recognition centres and how those things change over time. I was very passionate about it.
In 2003 we really tapped into fMRI (functional magnetic resonance imaging), which is the tool that allows you to stand up and talk while it’s scanning your brain. It was an amazing time to be in cognitive neuroscience. Then I started looking at my peers, the people who were studying this stuff, and I went, ‘wait a minute, you guys have already studied this stuff for 5 or 10 years; aren’t you bored?’
I get it; you’re not going to uncover all the mysteries of the human mind, at least not in our lifetimes. I’d like to be wrong on that; don’t hold me to it. But don’t you personally wonder what’s happening in the automotive industry? What if you’re excited about the way iPhones are going to be sold in 10 years’ time?
These are just hugely different categories—don’t you get bored? So, as soon as I finished that I went to the Bronx; I founded my charter school, which is a wonderful place if you ever get bored, and I decided to go into technology.
Darren:
It’s interesting how innovation and creativity happens across disciplines, categories, and industries.
Jared:
You’re just setting me up. That’s the very basis of what Sparks and Honey does, which is we look at culture on the horizontal.
Darren:
I think that’s important. Clients end up in particular categories. I know marketers who have only ever been in the automotive category and they start to believe that there is a particular way of marketing automotives.
They’re in Pharma and they think pharma is the only way. They often don’t see that opportunity, the insight is to not think about best practice in your category but to look across categories to see what’s happening elsewhere.
Jared:
I’m excited because you say automotive—advertising men think of ‘Think Small’ the way I think of what you do. Just to give you a very abstract version of this: when I was a kid I used to move my bedroom around a lot, my bed here, my desk here. What happens is that we get a sameness bias, where we like something to be the same for a while and then we hate it being the same.
We have these ebbs and flows. We have this way of thinking whether it’s advertisements or the way our house looks or anything in our lives we’re kind of used to. We get to a point where we absolutely need it to change and different people reach that point at different times.
And whenever it does every single person who was in advertising who was thinking the same way, every person who had the best practices for pharma they become obsolete in a day. And that is the evolution. That is why creativity, this passion has to be something dynamic.
Darren:
It’s quite a paradox though isn’t it? On one side, I agree with you; we are attracted to the new because we want change but then there’s other change we often resist and that’s fundamental change. It’s almost the superficial change but when it’s fundamental change, cultural change that’s happening and the older we get the more resistant we get to that change for many people.
Jared:
You’re talking about plasticity, and that’s a really exciting thing to talk about. At Sparks and Honey we have a few interns who are Gen Z (they want to be called founders now), and it just blows your mind to see how young they are and how consciously anti whatever the old thing was they are.
And I think that plasticity, that ability to be really elastic in cognition, for example. One of the things we’re really interested in is that moment where language becomes thought. You can take a 9 or 10-year-old and have them sound like a native speaker in almost any language. Once you turn 14, you’re locked in.
In the same way, we reach an age (and I don’t want to give a number)…
Darren:
And it will vary from person to person and their attitude, and people make conscious decisions to avoid locking themselves in.
Jared:
I like what you said, conscious decisions.
Darren:
I remember it happened to me; someone asked me for some recommendations on music and I put out some music ideas and they went, ‘oh, these are so 90s’. And I suddenly realised I’d stopped listening to new music. So I said to them, ‘what should I be listening to?’ And they gave me a list of about 50 different tracks and it completely opened up my mind to the fact that there is new, different, and interesting.
Jared:
Right but you know what’s interesting is what your brain had to do to start liking that new music. What’s interesting is the amount of energy. There’s this rule; you have to hear a poem 3 times, a song 3 times before you can really appreciate it.
As we get older it takes more mental energy, more cognitive distance, more cognitive challenge to listen to that and not just turn it off. It’s the same with poems, songs, even TV shows; it’s actually harder. One of the interesting things on Netflix is the resurgence of shows like The Office because it takes a lot less energy to watch a show like that because somewhere in our brains we got comfortable with the cadence of that show.
In the same way the cadence of that music is something we’re comfortable with; we don’t have to work at it. In the same way we have to look at how our marketing has gotten comfortable and we have to take a lot of energy to push out of it.
Darren:
This isn’t something that’s just particular to human beings. I know there are studies in lots of mammals and it’s the same thing, that as we get older we’re inclined to filter out more. And you’re saying it’s because of the energy it takes.
Jared:
There’s an interesting way to measure it. Essentially, if we look at fMRI (and I’ll probably say fMRI a million times) of an 8-year-old listening to music and of a 25-year-old listening to music that’s brand new to them, what we are looking for are patterns.
Have you ever heard that really cool effect where they record the sound at the eardrum when music is coming in? What we find is that music that is consonant sounds very flat (sounds the same), and music that is very dissonant, with a lot of anti-patterns in it, actually sounds very rumbly—non-patterned.
There’s a great podcast (Radiolab) on the way that sound attaches to our ears. When that reaches us we develop schema for understanding it at a very micro level. Our brain is smart enough to take those rumbling patterns and turn them into larger patterns.
And that same plasticity that allows us to learn a new language when we’re 8 is much harder to achieve when we’re much older so what happens is those rumbles (when we’re older) are actually grating on us.
Darren:
Jared, I have to thank you because you’ve just explained why I struggle with learning Mandarin; my wife speaks Mandarin. So, I’m going to use that excuse; it’s this brain of mine and it’s just got to the point where it can’t learn anything.
One of the things is that, with the volume of information, of structured and unstructured data, coming in all the time, people filter too much. Does this mean that potentially, as people get older, they’re filtering more and actually not considering all the possibilities?
Jared:
I like the word filtering because the most dangerous thing is the filter you apply without knowing you’re filtering. If you were in a store yesterday and someone said to you, ‘hey, what music did you listen to yesterday?’, would you be able to identify the new song you had only heard in the background of the store? You consciously filter out a lot of the stuff you didn’t know you didn’t want to filter out.
I think in the same way, with unstructured data, we now have these very big opportunities to see everything, but when you have so much to see, you not only have the filter you naturally apply (as you get older you filter things through a certain lens), you also have so much data that you start to make conscious choices.
And those choices are made on a huge bias. If you think of all of the training and go back to your automotive marketing executive who has done it the same way for years and years. Even if they wanted to, if you put that brand new piece of music in front of them they really wouldn’t see it.
Darren:
This has a big implication for decision making in organisations because obviously senior people have more experience therefore they’re more likely to filter and rely on their biases of the way they’ve always done things.
Jared:
Even if they tell themselves they’re not.
Darren:
Exactly. I can imagine the work that happens here at Sparks and Honey generates millions of pieces of structured and unstructured data.
Jared:
It does indeed.
Darren:
But you’ve been involved in a platform that actually pulls this together and allows people to have access to it?
Jared:
Well, if you look at all the data in the universe, it’s everything from social media data to academic papers to news articles. There’s so much of it that we at Sparks and Honey have always been under the impression that if only somebody could put it all in one place and start to understand it, then maybe we could better understand culture as a whole.
And the way to do that isn’t just to take the data from all over the place but, as you’ve indicated, to really put it in a structured, meaningful way. And we do that with something called the elements of culture.
So, if you take every piece of culture and tag it to find some meaning that comes out of it you end up with a somewhat discrete list of things that can help you understand what’s going on in culture.
To give you an example. One of the things that surprises most people when they hear it is how many people are now buying what we call adult Legos. So you go to the Lego store—I don’t know if you have any kids.
Darren:
I do, I have twin two-year-olds.
Jared:
That’s a great age, and twins are super fun. But you go to the Lego store and there’s all the kids’ stuff, but there are also these $400 sets to build a Lego car this big. It’s for people wanting to build this very adult, very expensive thing that’s a kid’s toy. What does that represent in culture?
How do I look at that idea and connect it to very similar other ideas, not just model making, which is a very good parallel, but also what about the resurgence of Alf (the TV show)? What about the resurgence of some of these old classic cartoons?
All of those come together at Sparks and Honey in this concept of kidult. Kidult is an element of culture where we’ve looked at all these weird anomalies and categorised them.
Darren:
The people I know who are kidults buy those Lego sets, build them, and then have them there almost as trophies. More than modelling—you know the old days of Revell and Airfix and all those plastic models you would buy and put together? They’re doing the same with Lego. In fact, The Lego Movie touched on that, because the father was the one who wanted to glue it all together so the kids couldn’t play with it.
It’s almost the antithesis of why the toy exists in the first place because it’s about exploring creativity.
Jared:
In many ways what Kidult represents is a mutation of the initial form. And if you take that idea, it’s a very similar idea to the fact of The Lego Movie being so popular with adults in the first place.
In all of these cases you have an idea that you had during childhood and then it gets mutated over time and now you have these ideas as an adult and that’s what Kidult is all about. So how does your marketing team, whether you’re Ford or Pepsi, take this understanding and translate that forward into how am I going to market to people who might have similar aspirations?
Darren:
So just to make it clear for me, this platform, what’s it called?
Jared:
It’s called Q Insight right now but we’re working with lawyers to get a better name.
Darren:
So, that’s a place where not only can you access these trends in an accessible form but also start looking at where that’s impacting across categories and potentially be able to look at what’s the impact that it could have on your business.
Jared:
Absolutely. And in fact we were talking earlier before we started recording about the line between machine learning and AI. So, let’s talk about that through the lens of this product. The product can give you all those elements of culture and all of those signals that underlie them. That’s really a machine learning play.
We have millions of signals; to give you an idea, this month we ingested 2 million: tweets, Reddit threads, YouTube videos. They all come into the system and then we normalise them, and that brings us to the structured data idea.
We normalise them: say, here’s an article that involves adults playing with Lego, therefore it’s Kidult; it’s been normalised and put in the system.
Darren:
It’s a framework.
Jared:
Right. It’s a framework for normalising and that’s a machine learning idea. Now, the next step is now that I know what each of these things are, let’s start slicing and dicing it. What’s happening in Ohio? How is that different to what’s happening in Shanghai?
So, that’s this idea of taking this normalised data and categorising it. All of that is machine learning.
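The normalise-then-slice pipeline Jared describes can be sketched as a toy example. The element names, keyword rules, and signals below are invented purely for illustration; the real taxonomy and ingestion system are far richer than a keyword match.

```python
# Toy sketch of "normalise, then slice": raw signals are tagged with
# elements of culture, then grouped by region. All names and rules here
# are hypothetical.
import re

SIGNALS = [
    {"text": "Adults queue for the $400 Lego sports car set", "region": "Ohio"},
    {"text": "Classic cartoon reboot trends on streaming", "region": "Shanghai"},
    {"text": "New electric sedan unveiled at auto show", "region": "Ohio"},
]

# Hypothetical element-of-culture taxonomy: element -> trigger keywords.
ELEMENTS = {
    "kidult": {"lego", "cartoon", "toy"},
    "automotive": {"sedan", "auto", "car"},
}

def normalise(signal):
    """Attach every matching element-of-culture tag to a raw signal."""
    words = set(re.findall(r"[a-z]+", signal["text"].lower()))
    tags = [name for name, keywords in ELEMENTS.items() if words & keywords]
    return {**signal, "tags": tags}

def slice_by_region(signals, element):
    """Count tagged signals per region for one element of culture."""
    counts = {}
    for s in map(normalise, signals):
        if element in s["tags"]:
            counts[s["region"]] = counts.get(s["region"], 0) + 1
    return counts
```

With this toy data, slicing the Kidult element by region counts one signal each for Ohio and Shanghai, which is the "what’s happening in Ohio versus Shanghai" question in miniature.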
Darren:
And applying a cultural context.
Jared:
Of course and attaching potentially a sales context as well because we connect to your DMP so we can look at the demographics of people who are looking at these particular things. So, those are all machine learning.
What makes it AI, and takes it to the next level, is to analyse or predict things like future state. For example (this is a very standard marketing example), if I am a woman buying diapers for the first time, it’s a pivot. If we look at the failure of the beauty industry right now to sell makeup, we saw a 15 to 20% dip in makeup sales in the last few years.
Why? Because millennial mums are having babies so now AI allows us to take the current state of culture and then predict the future state. To take all of that input and then build models against what’s going to be happening in 6 months, a year, 2 years.
Darren:
So, the AdTech, Martech area is alive with everyone saying, ‘oh AI this and AI that’ and when you actually look at it most of it is machine learning, pattern recognition. It’s not AI. My concern is that artificial intelligence has become the latest thing, like putting organic onto a juice container.
It’s one of my concerns because I think we need to be very clear because the potential of each is quite different as you’ve just said. The ability to be able to take huge amounts of data, categorise it, identify patterns, and then be able to extrapolate that pattern forward and call it a prediction is not necessarily AI is it?
Jared:
I think the question is how you do it. Machine learning would just assume that patterns from the past would be grouped and then carried forward into the future.
Darren:
Even in certain circumstances. I could take hundreds of different parameters, look at how they influence that, and assume the same going forward.
Jared:
AI is deciding what the groups are. AI is making predictions, testing them against itself, and then trying to find other variables that might give you clearer predictions. To give you a clearer idea, remember fidget spinners? I don’t know how much I’m dating myself here, but fidget spinners and snap bracelets are really good stories to tell when it comes to prediction.
The length of trends has shrunk dramatically. Machine learning, theoretically, would assume a linear curve; it would build models against the sales data for snap bracelets and the sales data for fidget spinners and theoretically could extrapolate some pattern for the next big, if quick, thing.
What AI would do is look at that and say it’s not enough data, and it would try to compare it to other similar things, from Beanie Babies all the way to the way that neon pads worked out. And it would use that information in a long-term way to get more data.
I would say the difference between machine learning and AI is with machine learning the computers are never going to stop and ask you for more data whereas with AI at some point, AI is going to ask do you have marketing data for China?
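The contrast Jared draws can be sketched as a toy example. Every function name, threshold, and number here is illustrative, not any real product’s API: the plain model always extrapolates the trend it was given, while the "AI-like" wrapper first checks whether it has enough observations and asks for more instead of guessing.

```python
# Toy contrast: a model that always extrapolates vs one that can say
# "I need more data". Thresholds and names are hypothetical.

def fit_trend(history):
    """Least-squares slope and intercept over evenly spaced observations."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def ml_extrapolate(history, steps_ahead):
    """Machine-learning style: always projects the fitted line forward."""
    slope, intercept = fit_trend(history)
    return slope * (len(history) - 1 + steps_ahead) + intercept

def ai_extrapolate(history, steps_ahead, min_points=5):
    """'AI-like' style: refuses to guess and requests more data instead."""
    if len(history) < min_points:
        return {"status": "need_more_data",
                "request": f"at least {min_points - len(history)} more observations"}
    return {"status": "prediction",
            "value": ml_extrapolate(history, steps_ahead)}
```

Given six evenly growing sales figures, both paths extrapolate the trend, but given only two data points (a fidget-spinner-length history), the second path comes back asking for more observations rather than projecting a line.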
Darren:
So it almost has some awareness?
Jared:
Yes.
Darren:
And I use the term very loosely—awareness of the fact that there is not enough information to be able to project forward with any sort of accuracy.
Jared:
In fact, maybe it’s the first sort of technology that will whine at us.
Darren:
Give me more.
Jared:
But I think the other thing that makes AI interesting is that it will have a P.O.V. When we talk about what consciousness is, in a way it’s about having a point of view. And right now in AI, even though there is some stuff happening with DARPA, there are very few things that actually have a point of view that might be different from what was programmed in.
I think what’s going to be interesting is when we start defining AI not only by whether it was able to achieve a task but by how Jared’s AI differed from Darren’s AI. How did their points of view change the way they extrapolated this data? Because with AI, ironically, we’re moving further away from a single source of truth. In fact, we’re moving towards a truth based on sources.
Darren:
I know it’s a little outdated but you know that hierarchy of information? Data, information, insight, knowledge and wisdom—this is actually working across all of those?
Jared:
Yes, and wisdom in a way is at multiple stages, which is different from our current model. That model is based on the way we teach and the way we think.
Darren:
A linear model rather than a network model?
Jared:
Exactly, and what happens is it assumes you have one brain. One of the things that a lot of thinkers have failed to understand about AI is that it’s not one brain. And when you have multiple POVs and multiple ad servers, it’s much more equivalent to the internet, or some way of human beings interacting with each other and coming up with an answer.
And that means it’s very likely AI is going to very consciously give you multiple answers.
Darren:
I belong to the Ethics Centre, and this comes up as a constant discussion around ethics and how ethics can be built into AI.
Jared:
Which I still think is a ‘should it be’, but I’ll take that. I think Asimov makes this point.
Darren:
You think it should be built in?
Jared:
No, my point was: should it? ‘Your honour, I didn’t kill my wife; I’m not married.’ You have to decide whether or not it’s a good question to even ask. I think the challenge, going back to the Asimov reference, is: if we have a moral sense, can that moral sense not be perverted?
It’s the standard sci-fi trope of the robot who, in order to save humanity, enslaves humanity. It’s a question of empowering AI, of course, but I think the real question is: is it possible that we disentangle this idea of morality and instead start thinking about outcome?
I don’t want to give my AI a sense of morality because it could turn bad but I do want to give my AI a lot of barriers so it doesn’t go past something without a certain amount of control. I’m not saying that’s a final endpoint but I do think we’re going to have to grapple with the fact that morality as a programming concept is something that can easily be twisted.
And unlike a human an AI isn’t going to have one morality. And I think it’s going to be very interesting how that ends up but there are a lot of great answers out there.
Darren:
It certainly provides a terrific tool. Going back to our earlier conversation, we’ve got major decision-makers who have got to a point in life where they’re filtering all of the information coming to them or it’s coming through particular filters that have an agenda and they’re basing their decisions on this.
Wouldn’t this be the future of decision-making within business, government, organisations? Do you see an opportunity to have AI’s multiple thoughts and processes of data informing those decisions very quickly?
Jared:
I think it’s a very exciting place to be. Sparks and Honey is actually building our idea of what we call the AI CEO or AI CMO. The idea is, not necessarily that we’re replacing that role but that we’re creating a wall of information for that person to make meaningful decisions based on those edge cases as well as the central cases.
In other words, if you’re in automotive, we’ll give you what you expect in automotive. We’re also going to challenge you in a way that’s productive (not so wild it’s meaningless) but also in a way that hopefully will steer you in a different direction.
And that’s where ethics comes in, isn’t it, because what if I’m steering you in the wrong direction? One of the things we’re going to see with megacities (huge, ginormous populations, very dense spaces) is this new question of whether or not we even care about the environment outside.
Right now we can say ‘we’re doing a better job of recycling; we’ve divided our trash for the next week’, but at the scale of a billion people in a city it’s immaterial how much you can sort. It’s going to be much more about dramatically rethinking the way we do things.
Can AI be built in an ethical enough way that it can make that a meaningful factor in its decision-making? If it can, would we ever listen to it? It’s a very big, different question.
Darren:
The Ethics Centre has built an online portal that takes you through a series of thought exercises to actually help you make ethical decisions. As a group (CEOs of major organisations) there was a conversation about just the simple definition of ethics; people were saying ‘it’s making the tough decisions’ or ‘doing the right thing’, which is moral rather than ethical.
Jared:
That’s a good point.
Darren:
The definition that was provided and I’ve now adopted this as my working definition of ethics is doing the least amount of harm to the least number of people. It’s based on the premise that every decision you make has potential for harm.
Jared:
It’s an interesting idea.
Darren:
From that direction, how do I then consider (and that’s what this decision-making exercise takes you through) framing my decision, taking into consideration all stakeholders, the environment, every aspect I should weigh in making that decision? It seems to me this is the type of thing you could build into an AI, on the basis that every recommendation comes with a harm attached to it.
Jared:
To take an example from insurance, if you’re familiar with actuarial science and the way insurance works.
Darren:
I love actuarial science.
Jared:
It’s a fun way to spend your afternoon. One of the key ideas is that insurance is based not just on potential harm but also on utility. In other words, if I have an insurance policy (insuring drones for example) well the first question is ‘does the drone have a purpose or value in the first place?’
If there is a one-in-a-trillion chance that the drone might hurt a hair on my head but no utility to it, then there is no point in having the drone and there is no insurance.
Darren:
That’s a great example. I was sitting in Zurich on a rooftop garden 2 months ago and this drone was flying overhead and I went ‘those damn drones—I’d love to shoot it out of the sky’. And my friend, who lives in Zurich, said ‘that’s the one the hospital uses to send blood samples to the university because it’s the fastest way of getting across town’. And I felt so bad because I was thinking ‘drones –aarrggh’ but there you go–utility.
Jared:
Utility is it. So, to go back to the morality or ethics question…
Darren:
If I’d shot it out of the sky I would have felt mortified.
Jared:
You would have indeed. But in other words, there is potential harm in a drone, so if you ran that equation as doing the least harm but did not do the maths against its possible utility, then you are missing the point.
So, we have to teach AI to understand utility as a concept and understand the space between it. If you said to any normal human being I’ll give you a trillion trillion dollars to hurt this other person, hopefully they would say ‘no’. It’s the idea that we understand utility and that utility can only be balanced in a very oblique way against what we consider to be harm.
In the same way that drone was probably not releasing hydrocarbons but it certainly required energy and it did pose some threat if it fell down by accident or hit somebody but the utility is pretty obvious. What happens when it’s not? How do we train AI in that space?
Darren:
That’s true but isn’t that part of AI; the ability to constantly learn, draw on the past to consider the present, and project into the future?
Jared:
Hopefully so, but going to that website and asking ‘shall I do this or that?’, how do you measure that against utility? What is the value of utility? One of the challenges that is very hard to cognitively accept, and it’s where machine learning really comes in, is what happens when it’s multivariate.
So, if the utility is over here in health but the challenge or detriment is in money, those are incomparable. It’s immaterial how much it costs if I can save a life.
Darren:
I do have to clarify; it’s not a decision tree. It doesn’t go yes or no; it actually sets up a series of conversations so that people can consider as close to holistically as possible.
Jared:
So, you built Jiminy Cricket.
Darren:
Yeah, that’s it. So, what type of things will be required to move from Q Insights to being able to get the AI CEO?
Jared:
It’s a valuable step, but in a way the real story for an AI CMO is about connecting the entirety of your organisation to what you’re doing. One of our advisors is Indra Nooyi—an amazing woman—who used to be the CEO of Pepsi.
One of the stories she tells, which I think is really powerful, is that she was shocked when she realised that, as CEO, she had no control over her own schedule. Her meetings, outings, and even breaks were planned 2 years in advance, because the organisation is such a complex entity that you not only have to plan around a million people’s schedules but you also have to think through the consequences of meeting a person for a certain amount of time.
So, if you imagine the complexity of that organism and then put AI and machine learning to work at getting into each piece of it then you have some real value. The first thing is to digitally transform these organisations in a way that every important piece of data or variable is met with AI. That’s the first step.
What Sparks and Honey is doing now with outside culture is easy because it manifests itself.
Darren:
One of the problems for these big organisations is that they’re still built on a traditional organisational structure that commenced back with the Roman legions—they were the first ones to organise into silos; battalions were the army version.
How can you then understand the limitations of that silo when you’re starting to connect them with technology?
Jared:
It’s a great question. It’s really about evolution. When you think about how a small company evolves to grow bigger that’s where those silos come from. I’ll never forget the first time I heard myself say ‘we should have a form for that’. You want to make this decision, purchase this thing, I realise 10 people have goofed it up; I guess we need a form for that.
It’s when you hear yourself saying ‘TPS report’ that you suddenly realise organisations naturally evolve to become more complex because there are a bunch of safeguards. What you need to do to de-silo is invest in trust. You need to put in resources that help you make choices so that you don’t fear the possibility of somebody breaking something.
And then after you’ve invested in that trust you need to devolve all of those systems that represented a lack of trust. Think of HP and that story of Hewlett unlocking the closet doors for the technology. That’s the idea that you move away from those silos.
Darren:
It’s much easier to build from scratch than it is to devolve isn’t it?
Jared:
Of course it is. That’s why start-ups have it so easy, until they don’t.
Darren:
But then we see a lot of major Start-ups end up with that traditional structure. This is what you were saying about people naturally evolving into this without actually thinking about the way of doing it.
Jared:
Absolutely and that’s where the AI CMO will have an advantage because the AI might be able to take some of the stress away from some of those difficult processes.
Darren:
And also for the CEO. I can imagine that to go into PepsiCo and have an AI that’s organising or at least collecting the data from the organisation, sorting it and giving options.
Jared:
It’s the future, sir.
Darren:
So, it actually provides efficiency.
Jared:
That’s the idea.
Darren:
And it allows people who are getting older and wiser to take on decision-making where their biases and filters won’t be as prevalent as perhaps they are now.
Jared:
And learn Chinese at the same time.
Darren:
That would be an absolute miracle. This has been a terrific conversation; thank you, Jared.
Jared:
It’s been a real pleasure. Thank you so much for having me. Every time you have a chance to talk to somebody new, you have an opportunity to rethink these big ideas from their perspective.
Darren:
I do have one last question. Do you think we’ll ever have an AI president?
Ideal for marketers, advertisers, media and commercial communications professionals, Managing Marketing is a podcast hosted by Darren Woolley and special guests. Find all the episodes here