IBIS for all of your Strategic Environmental Analysis


The new Crayon State of CI report is out, and there were a few interesting outcomes. The headline finding (you can read the full report here: https://www.crayon.co/blog/new-data-competitive-intelligence-increases-revenue), and a striking one for me as a professional intelligence practitioner, is that CI programs are growing fast and have become more important to their companies than ever before. Whereas in 2020, 58% of businesses indicated that they had CI teams of two or more dedicated professionals, that figure rose to 70%, a 21% year-over-year increase in the number of businesses with CI teams of two or more people. Crayon says: “There is no better indication of importance within a company than the dedication of additional headcount and resources to a certain department — especially in the midst of the COVID-19 pandemic and the corresponding economic hardship. By rapidly growing their CI teams and budgets, companies are shouting from the rooftops that they know competitive intelligence to be an increasingly important investment.”

ITWeb reports that we are all data scientists now: “News programmes have been obsessing about data in a way that's never happened before, and data is more than ever in the public eye. We are also generating lots of data today, and are aiming towards gaining wisdom and knowledge that can be used to heal the situation which has been so difficult for so many people during this COVID-19 pandemic.” The founder of Data Relish, Jen Stirrup, speaks of an “infodemic.” So how can we make better sense of the truth in a world swamped by data and information, disinformation and misinformation? What tools are there to help us navigate? CI and AI certainly rank among the best. Do invest in them in whatever shape or form your company can afford.

Despite the major challenges brought about by the COVID-19 pandemic – not least of which, several rounds of bans on the sale and distribution of alcohol – 2020 has given the wine industry some valuable lessons to chew over.

In this episode, Jono le Feuvre sits down with Vinimark Export and Marketing Director, Geoff Harvey to unpack some of the most interesting wine retail and buying trends that emerged from an all-round unprecedented year.

Listen to the podcast.

We are at the start of another rather unpredictable year. 2020 has left nobody unscathed. 2021 does not hold any promise of more certainty. Emily Dumas writes on her informative Competitive Intelligence blog that businesses do not function in vacuums.

“Higher fares, fewer routes, pre-flight health checks and less free food: The coronavirus pandemic is ushering in a new era of air travel” (Bloomberg, 2020).

Interview with Jeff Hawkins, neuroscientist and tech entrepreneur, as published in Technology Review (MIT), 3 March 2021

Neuroscientist and tech entrepreneur Jeff Hawkins claims he’s figured out how intelligence works—and he wants every AI lab in the world to know about it. MIT Technology Review

Will Douglas Heaven
March 3, 2021

The search for AI has always been about trying to build machines that think—at least in some sense. But the question of how alike artificial and biological intelligence should be has divided opinion for decades. Early efforts to build AI involved decision-making processes and information storage systems that were loosely inspired by the way humans seemed to think. And today’s deep neural networks are loosely inspired by the way interconnected neurons fire in the brain. But loose inspiration is typically as far as it goes. A machine that could think like a person has been the guiding vision of AI research since the earliest days—and remains its most divisive idea. Most people in AI don’t care too much about the details, says Jeff Hawkins, a neuroscientist and tech entrepreneur. He wants to change that. Hawkins has straddled the two worlds of neuroscience and AI for nearly 40 years. In 1986, after a few years as a software engineer at Intel, he turned up at the University of California, Berkeley, to start a PhD in neuroscience, hoping to figure out how intelligence worked. But his ambition hit a wall when he was told there was nobody there to help him with such a big-picture project. Frustrated, he swapped Berkeley for Silicon Valley and in 1992 founded Palm Computing, which developed the PalmPilot—a precursor to today’s smartphones.

But his fascination with brains never went away. Fifteen years later, he returned to neuroscience and set up the Redwood Center for Theoretical Neuroscience (now at Berkeley). Today he runs Numenta, a neuroscience research company based in Silicon Valley. There he and his team study the neocortex, the part of the brain responsible for everything we associate with intelligence. After a string of breakthroughs in the last few years, Numenta changed its focus from brains to AI, applying what it has learned about biological intelligence to machines. Hawkins’s ideas have inspired big names in AI, including Andrew Ng, and drawn accolades from the likes of Richard Dawkins, who wrote an enthusiastic foreword to Hawkins’s new book A Thousand Brains: A New Theory of Intelligence, published March 2. I had a long chat with Hawkins on Zoom about what his research into human brains means for machine intelligence. He’s not the first Silicon Valley entrepreneur to think he has all the answers—and not everyone is likely to agree with his conclusions. But his ideas could shake up AI.

Why do you think AI is heading in the wrong direction at the moment?

That’s a complicated question. Hey, I’m not a critic of today’s AI. I think it’s great; it’s useful. I just don’t think it’s intelligent. My main interest is brains. I fell in love with brains decades ago. I’ve had this attitude for a long time that before making AI, we first have to figure out what intelligence actually is, and the best way to do that is to study brains. Back in 1980, or something like that, I felt the approaches to AI were not going to lead to true intelligence. And I’ve felt the same through all the different phases of AI—it’s not a new thing for me. I look at the progress that has been made recently with deep learning and it’s dramatic, it’s pretty impressive—but that doesn’t take away from the fact that it’s fundamentally lacking. I think I know what intelligence is; I think I know how brains do it. And AI is not doing what brains do.

Are you saying that to build an AI we somehow need to re-create a brain?

No, I don’t think we’re going to build direct copies of brains. I’m not into brain emulation at all. But we’re going to need to build machines that work along similar principles. The only examples we have of intelligent systems are biological systems. Why wouldn’t you study that? It’s like I showed you a computer for the first time and you say, “That’s amazing! I’m going to build something like it.” But instead of looking at it, trying to figure out how it works, you just go away and start trying to make something from scratch.

So what is it brains do that’s crucial to intelligence that you think AI needs to do too?

There are four minimum attributes of intelligence, a kind of baseline. The first is learning by moving: we cannot sense everything around us at once. We have to move to build up a mental model of things, even if it’s only moving our eyes or hands. This is called embodiment. Next, this sensory input gets taken up by tens of thousands of cortical columns, each with a partial picture of the world. They compete and combine via a sort of voting system to build up an overall viewpoint. That’s the thousand brains idea. In an AI system, this could involve a machine controlling different sensors—vision, touch, radar and so on—to get a more complete model of the world, although there will typically be many cortical columns for each sense, such as vision. Then there’s continuous learning, where you learn new things without forgetting previous stuff. Today’s AI systems can’t do this. And finally, we structure knowledge using reference frames, which means that our knowledge of the world is relative to our point of view. If I slide my finger up the edge of my coffee cup, I can predict that I’ll feel its rim, because I know where my hand is in relation to the cup.

Your lab has recently shifted from neuroscience to AI. Does that correspond to your thousand brains theory coming together?

Pretty much. Up until two years ago, if you walked into our office, it was all neuroscience. Then we made the transition. We felt we’d learned enough about the brain to start applying it to AI.
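The “voting” among cortical columns that Hawkins describes can be pictured as combining many partial, noisy beliefs into one consensus. The following is a toy sketch in Python, assuming each column outputs a probability vector over candidate objects; the object list and the `vote` function are hypothetical illustrations, not Numenta’s actual code.

```python
import numpy as np

# Hypothetical set of objects the system might be sensing.
objects = ["coffee cup", "stapler", "phone"]

def vote(column_beliefs):
    """Combine per-column probability vectors by multiplying them
    (summing log-probabilities), then renormalizing into a consensus."""
    log_total = np.sum(np.log(column_beliefs), axis=0)
    probs = np.exp(log_total - log_total.max())  # subtract max for stability
    return probs / probs.sum()

# Three columns, each sensing a different patch and each uncertain on its own.
beliefs = np.array([
    [0.5, 0.3, 0.2],    # column sensing the rim
    [0.6, 0.2, 0.2],    # column sensing the handle
    [0.4, 0.35, 0.25],  # column sensing the curved side
])
consensus = vote(beliefs)
print(objects[int(np.argmax(consensus))])  # prints "coffee cup"
```

No single column is confident, but the product of their evidence concentrates on one object, which is the intuition behind the thousand-brains voting idea.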

What kinds of AI work are you doing?

One of the first things we looked at was sparsity. At any one time, only 2% of our neurons are firing; the activity is sparse. We’ve been applying this idea to deep-learning networks and we’re getting dramatic results, like 50-times speed-ups on existing networks. Sparsity also gives you more robust networks and lower power consumption. Now we’re working on continuous learning.

It’s interesting that you include movement as a baseline for intelligence. Does that mean an AI needs a body? Does it need to be a robot?

In the future I think the distinction between AI and robotics will disappear. But right now I prefer the word “embodiment,” because when you talk about robots it conjures up images of humanlike robots, which isn’t what I’m talking about. The key thing is that the AI will have to have sensors and be able to move them relative to itself and the things it’s modeling. But you could also have a virtual AI that moves in the internet. This idea is quite different from a lot of popular ideas about intelligence, of a disembodied brain. Movement is really interesting. The brain uses the same mechanisms to move my finger over a coffee cup, or move my eyes, or even when you’re thinking about a conceptual problem. Your brain moves through reference frames to recall facts that it has stored in different locations. The key thing is that any intelligent system, no matter what its physical form, learns a model of the world by sensing different parts of it, by moving in it. That’s bedrock; you can’t get away from that. Whether it looks like a humanoid robot, a snake robot, a car, an airplane, or, you know, just a computer sitting on your desk scooting around the internet—they’re all the same.
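The sparsity idea above, that only about 2% of units are active at once, can be sketched as a k-winners-take-all operation in NumPy. This is a minimal illustration under that assumption; `k_winners` is a hypothetical name, not Numenta’s implementation.

```python
import numpy as np

def k_winners(activations, sparsity=0.02):
    """Keep only the top-k activations (k = sparsity * n) and zero the rest,
    mimicking the ~2% of neurons firing at any one time."""
    k = max(1, int(sparsity * activations.size))
    # Indices of the k largest activations (unordered among themselves).
    top = np.argpartition(activations, -k)[-k:]
    mask = np.zeros_like(activations)
    mask[top] = 1.0
    return activations * mask

x = np.random.randn(1000)
y = k_winners(x, sparsity=0.02)
print(np.count_nonzero(y))  # 20 active units out of 1000
```

Because most units are zeroed, downstream layers only need to process the surviving activations, which is one way sparse networks can yield the speed and power savings Hawkins mentions.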

How do most AI researchers feel about these ideas?

The vast majority of AI researchers don’t really embrace the idea that the brain is important. I mean, yes, people figured out neural networks a while ago, and they’re kind of inspired by the brain. But most people aren’t trying to replicate the brain. It’s just whatever works, works. And today’s neural networks are working well enough. And most people in AI have very little understanding of neuroscience. It’s not surprising, because it’s really hard. It’s not something you just sit down and spend a couple of days reading about. Neuroscience itself has been struggling to understand what the hell’s going on in the brain. But one of the big goals of writing this book was to start a conversation about intelligence that we’re not having. I mean, my ideal dream is that every AI lab in the world reads this book and starts discussing these ideas. Do we accept them? Do we disagree? That hasn’t really been possible before. I mean, this brain research is less than five years old. I’m hoping it’ll be a real turning point.

How do you see these conversations changing AI research?

As a field, AI has lacked a definition of what intelligence is. You know, the Turing test is one of the worst things that ever happened, in my opinion. Even today, we still focus so much on benchmarks and clever tricks. I’m not trying to say it’s not useful. An AI that can detect cancer cells is great. But is that intelligence? No. In the book I use the example of robots on Mars building a habitat for humans. Try to imagine what kind of AI is required to do that. Is that possible? It’s totally possible. I think at the end of the century, we will have machines like that. The question is how do we get away from, like, “Here’s another trick” to the fundamentals needed to build the future.

What did Turing get wrong when he started the conversation about machine intelligence?

I just mean that if you go back and read his original work, he was basically trying to get people to stop arguing with him about whether you could build an intelligent machine. He was like, “Here’s some stuff to think about—stop bothering me.” But the problem is that it’s focused on a task. Can a machine do something a human can do? And that has been extended to all the goals we set for AI. So playing Go was a great achievement for AI. Really? [laughs] I mean, okay. The problem with all performance-based metrics, and the Turing test is one of them, is that it just avoids the conversation or the big question about what an intelligent system is. If you can trick somebody, if you can solve a task with some sort of clever engineering, then you’ve achieved that benchmark, but you haven’t necessarily made any progress toward a deeper understanding of what it means to be intelligent.

Is the focus on humanlike achievement a problem too?

I think in the future, many intelligent machines will not do anything that humans do. Many will be very simple and small—you know, just like a mouse or a cat. So focusing on language and human experience and all this stuff to pass the Turing test is kind of irrelevant to building an intelligent machine. It’s relevant if you want to build a humanlike machine, but I don’t think we always want to do that.

You tell a story in the book about pitching handheld computers to a boss at Intel who couldn’t see what they were for. So what will these future AIs do?

I don’t know. No one knows. But I have no doubt that we will find a gazillion useful things for intelligent machines to do, just like we’ve done for phones and computers. No one anticipated in the 1940s or 50s what computers would do. It’ll be the same with AI. It’ll be good. Some bad, but mostly good. But I prefer to think of this in the long term. Instead of asking “What’s the use of building intelligent machines?” I ask “What’s the purpose of life?” We live in a huge universe in which we are little dots of nothing. I’ve had this question mark in my head since I was a little kid. Why do we care about anything? Why are we doing all this? What should our goal be as a species? I think it’s not about preserving the gene pool: it’s about preserving knowledge. And if you think about it that way, intelligent machines are essential for that. We’re not going to be around forever, but our machines could be. I find it inspirational. I want a purpose to my life. I think AI—AI as I envision it, not today’s AI—is a way of essentially preserving ourselves for a time and a place we don’t yet know.


Crayon's Competitive Intelligence Spotlight is an interview series with intelligence professionals, offering a glimpse into their careers and unique insight into competitive strategy. In this edition of the Competitive Intelligence Spotlight Series, Paul Santilli, Worldwide OEM Industry Intelligence and Strategy at Hewlett Packard Enterprise, talks about competitive strategy:
