It turns out ML algorithms need a dose of common sense and intuition when it comes to making good decisions, writes Zita Goldman
Nobel-laureate psychologist Daniel Kahneman has written a lot about the biases of the human mind. In his seminal book Thinking, Fast and Slow, he presents readers with a typology of heuristics – or shortcuts – that people tend to take when making quick decisions. For example, we often overestimate the frequency of a feature or the probability of an event if we can easily recall examples of them. The oft-cited thought experiment in Kahneman’s book involves a woman named Linda, who is “deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.” When asked whether it was more probable that Linda was (a) a bank teller, or (b) a bank teller active in the feminist movement, most people chose (b) – even though it was statistically less likely for Linda to be these two quite specific things, rather than just one of them.
The list of cognitive biases is ever expanding, yet also so intrinsic to our fast, intuitive and effortless cognitive system that, says Kahneman, the only way to keep them under control is to monitor them with slow, logical, energy-intensive processes and address any glaring anomalies. Kahneman found that our brains actually have two separate operating systems – a fast one and a slow one – which operate in tandem, with the second, though far less influential in ultimate decision making, acting as a brake on the dominant first.
But where does this fit into AI and machine learning? Those who trust technology more than humans believe that the most efficient way of eliminating the flaws in our thinking is to rely on disinterested, even-handed algorithms to make predictions and decisions, rather than inconsistent, prejudiced humans. But are the algorithms we use in artificial intelligence (AI) today really up to scratch? Or do machines have their own fallibilities when it comes to preconceptions?
Programmed to perform
Although there is general consensus that AI is designed to mimic human intelligence in machines, the aspects of human thinking that different models try to emulate vary. Back in the 1980s, developers tried to imitate the human ability to both reason and make reasonable assumptions with the help of logic.
Later, in the 1990s, with the availability of torrential amounts of data and an explosion in computing power, machine learning (ML) stole the AI show. The drive to imitate humans’ symbolic, “slow” reasoning had fallen by the wayside.
The reason for this is that ML’s predictions are based on correlations observed among vast quantities of data. Its algorithms learn using the old-fashioned trial and error principle, altering the weight given to each piece of information fed into the system.
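To make that concrete, here is a minimal sketch of that trial-and-error principle, written in Python with invented data: a tiny perceptron classifier nudges its weights after every mistake until its predictions fit the examples. Every name and number in it is illustrative, not drawn from any real system.

```python
# Minimal perceptron: "trial and error" weight updates on toy data.
# All data and parameters here are illustrative.

examples = [
    # (features, label): does an email mention "prize"? does it have a link?
    ((1, 1), 1),   # spam
    ((1, 0), 1),   # spam
    ((0, 1), 0),   # not spam
    ((0, 0), 0),   # not spam
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for epoch in range(10):
    for features, label in examples:
        # Predict with the current weights.
        activation = sum(w * x for w, x in zip(weights, features)) + bias
        prediction = 1 if activation > 0 else 0
        # On error, shift each weight toward the correct answer.
        error = label - prediction
        weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
        bias += learning_rate * error

print("learned weights:", weights, "bias:", bias)
```

Scaled up to millions of weights and examples, this same feedback loop is what modern ML systems run on.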
ML algorithms can be hugely successful when applied to closed systems – games such as chess and Go, for example. They can unlock the power of unstructured data and predict preferences and emerging trends. Their attention never flags, and they have no concept of things such as tiredness or emotion that could skew their decision making.
Where their limitation lies, however, is in transferability. In order to be robust, they need to be applied to the kind of data that they have been trained and tested on. Which brings us to the rise of machine reasoning (MR), a potential new AI trend that aims to create models of logical techniques such as induction and deduction. Machine learning is very good at doing highly specific things, but it cannot solve new problems, and is therefore incapable of mimicking the flexibility of the human brain.
To borrow an analogy from Yoshua Bengio, a computer scientist renowned for his contribution to artificial neural networks and deep learning, if you train an ML algorithm in a room with the lights on, you’ll need to create a new one for the very same room if the lights go off.
Despite machine learning’s potential to eliminate human cognitive flaws, in recent years we’ve seen several examples in recruitment, credit scoring and criminal courts of how human prejudice can be baked into machine learning software as an unintended consequence.
But these algorithms have their own innate biases too, the most typical being “overfitting”. As digital technology writer Adam Greenfield explains using a car-based analogy, if all the Chevrolet Camaros shown to an algorithm designed to distinguish between three specific car brands happen to be red, it will erroneously “think” that redness is a definitive feature of a Camaro, rather than an independent variable.
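A toy version of Greenfield’s Camaro problem makes the mechanism plain. In the sketch below (Python, with invented data), colour happens to correlate perfectly with the label in training, so a naive one-feature classifier latches onto it and fails the first green Camaro it meets:

```python
# Toy demonstration of "overfitting" to a spurious feature.
# Features: (is_red, has_long_bonnet). Labels: 1 = Camaro, 0 = other brand.
# In this invented training set every Camaro happens to be red.

train = [((1, 1), 1), ((1, 1), 1), ((1, 1), 1),   # red Camaros
         ((0, 0), 0), ((0, 0), 0), ((0, 0), 0)]   # non-red other cars

def stump_accuracy(data, feature_index):
    """Accuracy of a one-feature rule: predict Camaro when the feature is 1."""
    hits = sum(1 for x, y in data if (x[feature_index] == 1) == (y == 1))
    return hits / len(data)

# Both features look equally, perfectly predictive on the training set...
best = max(range(2), key=lambda i: stump_accuracy(train, i))
print("chosen feature:", ["is_red", "has_long_bonnet"][best],
      "train accuracy:", stump_accuracy(train, best))

# ...but a green Camaro at test time exposes the spurious "is_red" rule.
test = [((0, 1), 1)]   # a green Camaro
print("test accuracy of is_red rule:", stump_accuracy(test, 0))   # 0.0
print("test accuracy of bonnet rule:", stump_accuracy(test, 1))   # 1.0
```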
Algorithmic bias may result in examples such as the one quoted in a study by Tom Taulli, author of Artificial Intelligence Basics. Machine learning software used at a hospital to predict the risk of death from pneumonia, for example, arrived at the conclusion that patients with asthma were less likely to die from it than patients without – a rather counterintuitive conclusion that common sense can easily overrule. (The algorithm didn’t account for the fact that people with asthma typically receive faster and more intensive care, hence the lower mortality rate in the training data.)
But current ML algorithms don’t have a common sense – or “slow” – system to detect anomalies or reflect on and deal with their own biases. (Although Cyc, an ongoing 36-year-old project aiming to create a “common sense engine”, may still come to fruition one day.) It’s still left to flawed humans to detect ML algorithms’ racial, gender and programming biases or, in many cases, learn about them the hard way.
The occasionally scandalous failures of autonomous decision-making by machine learning with little or no human control serve as constant reminders that the technology can only assist and augment human decisions until reasoning and a certain “common sense” are embedded into the system. This, however, doesn’t take away from the merits of ML and the boost the technology can give to RoI through recommendations, opinion mining or personal modelling – areas allowing a more generous margin for error. Overstretching ML’s capabilities will underwhelm users and thwart adoption. But making the most of what it really excels at will build trust in it.
Jash Bansidhar, Managing Director, Advantech Europe
You’d be forgiven for assuming that a global crisis such as the one we experienced last year might result in a slowing pace of innovation, as companies diverted resources to short-term operations and survival. But in fact, in many instances the opposite has been true. The Covid-19 pandemic has precipitated a myriad of innovations designed to help governments, companies and individuals cope with what everyone is now calling “the new normal”.
In the area of smart cities, for example, existing community co-creation cultures and government funding have enabled rapid technology introductions such as Covid-19 apps.
Advantech’s IoT and solution integration expertise has played a key role in supporting this, but our involvement in innovation extends across manufacturing industry and smart-city applications.
We have also supported customers across a number of commercial sectors to drive their own innovation implementations. This has taken place across key technology areas such as artificial intelligence (AI), 5G and edge computing, strengthening our own market-leading position in industrial internet of things (IIoT) computing and devices.
The planned roll-out of 5G communications networks has gathered pace to meet the requirements for faster connectivity and digital co-operation in response to the increase in home working, as well as the accelerating implementation of technologies such as robotisation.
Meanwhile, companies are harnessing the power of analytics to make their operations faster, smarter and leaner. At the same time, AI increases insight and enables more responsive customer service and communication.
Central to many areas of increased innovation has been Advantech’s strategy of “co-creation” – collaboration with leading partners across the world, bringing together the best of both companies’ technologies to deliver solution-ready platforms for specific customer applications.
Working with Robovision, for example, we launched a no-code software platform which allows users to exploit AI through computer vision technology. Manufacturers can detect and classify defects in real time, automate previously human-driven processes, and so enjoy real benefits in productivity and safety.
Elsewhere, our partnership with BrainCreators has delivered an intelligent automation platform for visual asset inspection and monitoring. Not only does this enable real-time asset identification and initiation of required actions, it enables knowledge to be retained within a company through the employment of AI.
As Charles Darwin said, “It is not the strongest of the species that survives, nor the most intelligent; it is the one most adaptable to change.” That is what Advantech has always striven to be, and through working with partners with the same ethos, we are supporting an increased pace of innovation across multiple sectors and applications.
Click here to get in touch and learn more about Advantech’s industrial IoT solutions!
Adam Cousin, Technical Business Development Manager, Telit Communications
5G provides a host of sophisticated new capabilities to enable applications that wouldn’t be practical or even possible with 4G, 3G or 2G. To leverage those features, IoT systems designers need to understand the three essential service categories of the 5G New Radio (NR) architecture:
• Enhanced Mobile Broadband (eMBB) is ideal for bandwidth-intensive fixed and mobile applications. One example is providing gigabit broadband to homes where fibre or copper is expensive, too slow or simply unavailable, such as low population density areas.
• Massive machine-type communications (mMTC) are not bandwidth- or latency-critical, but they form the backbone of the majority of IoT deployments, such as smart cities or smart agriculture. The mMTC infrastructure is designed to support millions of IoT sensor systems within a given location. By comparison, a 4G network could support thousands or maybe tens of thousands in the same area.
• Ultra-reliable low-latency communication (URLLC) is designed to meet extreme quality-of-service (QoS) requirements for mission-critical applications. One example is single-digit-millisecond latency for telemedicine, Industry 4.0 and autonomous vehicles.
In the short term, availability is the biggest challenge to leveraging 5G’s capabilities. Although many operators have begun deploying 5G, they still have years to go before it matches their 4G coverage. More standards work must also be completed and then implemented. For example, 3GPP Release 16 (Rel 16), which enables URLLC, is still a year or two away from real-world deployments.
Many of 5G’s capabilities have their foundation in LTE. One example is mMTC, whose framework is found in 3GPP Rel 13 by way of NB-IoT and LTE-M. Even ahead of 5G, the LTE infrastructure around these LPWA technologies continues to evolve. Device makers who want to leverage applications against these services at scale must cater for the probability of in-field firmware updates and device management driven by the advancements of the global MNOs, mitigate against costly truck rolls, and offer continued quality of service throughout this Long-Term Evolution.
Looking forward, NR-Light is also a relevant element of 5G that the standardisation community is working on for the upcoming 3GPP Rel 17. NR-Light aims to fulfil the needs of low- and mid-tier devices, which are more demanding than NB-IoT and LTE-M but less demanding than URLLC or eMBB and served by LTE, paving the way for extensive adoption of 5G in all use cases.
IoT systems developers must also navigate 5G’s additional complexity. For example, 5G accommodates this magnitude of connected devices through RF capabilities far beyond those of 4G, 3G and 2G – from 600MHz up to the uncharted territory of millimetre wave (mmWave) – often using multiple bands simultaneously, such as interband carrier aggregation to achieve multi-gigabit downlinks. Systems designers need to consider how their band choices affect in-building coverage, antenna design, mobile operator certification and more.
Many IoT applications will continue to use LTE because it’s a mature technology that’s widely available. Still, even LTE has its share of complexity that can trip up systems designers accustomed to 2G and 3G. That’s why they often turn to a partner to help navigate their 4G and 5G options.
Take secure connection to cloud services, for example, which is critical for many IoT applications. NB-IoT, a technology limited in throughput with potentially long latency, does not work well with the heavyweight legacy TCP-based protocols demanded by enterprise cloud services, such as TLS or HTTPS.
Telit enables those IoT applications to enjoy NB-IoT’s benefits, overcoming such obstacles using secure lightweight protocols. Telit modems are born with the ability to connect securely using Lightweight M2M (LwM2M) to Telit’s intermediary OneEdge™ cloud and device management service, which includes API connectors to push application data directly to third-party enterprise cloud service providers.
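To illustrate why protocol weight matters – a generic sketch, not Telit’s or OneEdge’s actual API – the Python snippet below contrasts the same telemetry reading sent via a TLS-protected HTTP POST and via a single publish over MQTT, a lightweight publish/subscribe protocol in the same spirit as LwM2M. The endpoints are placeholders; requests and paho-mqtt are real libraries.

```python
# Contrast sketch (illustrative only): the same telemetry sent two ways.
import json

reading = {"device": "sensor-42", "temp_c": 21.5}

# Heavyweight path: HTTPS POST. Each message pays for a TLS handshake
# plus verbose HTTP headers, which is painful over NB-IoT.
import requests
requests.post("https://cloud.example.com/telemetry",   # placeholder URL
              json=reading, timeout=30)

# Lightweight path: one long-lived connection, then a publish that costs
# only a few dozen bytes of protocol overhead per message.
import paho.mqtt.client as mqtt
client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also takes a CallbackAPIVersion
client.connect("broker.example.com", 1883)              # placeholder broker
client.loop_start()
info = client.publish("devices/sensor-42/telemetry", json.dumps(reading), qos=1)
info.wait_for_publish()
client.loop_stop()
client.disconnect()
```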
Besides helping customers sidestep the signalling limitations of new technologies, Telit also provides connectivity (SIM cards) and device management tools, such as over-the-air firmware campaigns. This comprehensive portfolio, provided by a single company, enables IoT solution providers to avoid the problems – including finger-pointing – that arise when they cobble together multi-vendor systems independently. That’s critical for being successful in the brave new world of 5G.
Learn how 5G is creating endless new opportunities across industries
Francesco Biondi, University of Windsor
With self-driving cars gaining traction in today’s automobile landscape, the issue of legal liability in the case of an accident has become more relevant.
Research in human-vehicle interaction has shown time and again that even systems designed to automate driving — like adaptive cruise control, which maintains the vehicle at a certain speed and distance from the car ahead — are far from being error-proof.
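To see what such a system actually computes, consider the deliberately simplified, hypothetical controller below – a sketch of the feedback logic, not any manufacturer’s real algorithm. It trades the driver’s set speed against a safe following gap; all gains and limits are invented.

```python
# Toy adaptive cruise control: hold the set speed unless the gap to the
# lead car shrinks below a safe following distance. Purely illustrative.

def acc_acceleration(own_speed, set_speed, gap, lead_speed,
                     time_headway=1.8, k_speed=0.5, k_gap=0.3):
    """Return a commanded acceleration (m/s^2), clamped to comfort limits."""
    desired_gap = time_headway * own_speed      # safe distance grows with speed
    if gap < desired_gap:
        # Too close: close the speed difference and restore the gap.
        accel = k_gap * (gap - desired_gap) + k_speed * (lead_speed - own_speed)
    else:
        # Clear road: track the driver's set speed.
        accel = k_speed * (set_speed - own_speed)
    return max(-3.0, min(1.5, accel))

# Cruising at 30 m/s, set speed 32 m/s, slower car 40 m ahead doing 25 m/s:
print(acc_acceleration(own_speed=30, set_speed=32, gap=40, lead_speed=25))
```

Even in this toy form, the controller’s behaviour depends entirely on tuned thresholds and clean sensor readings, which is one reason real systems are far from error-proof.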
Recent evidence points to drivers’ limited understanding of what these systems can and cannot do (also known as mental models) as a contributing factor to system misuse.
There are many issues troubling the world of self-driving cars including the less-than-perfect technology and lukewarm public acceptance of autonomous systems. There is also the question of legal liabilities. In particular, what are the legal responsibilities of the human driver and the car maker that built the self-driving car?
Trust and accountability
In a recent study published in Humanities and Social Science Communications, the authors tackle the issue of over-trusting drivers and the resulting system misuse from a legal viewpoint. They look at what the manufacturers of self-driving cars should legally do to ensure that drivers understand how to use the vehicles appropriately.
One solution suggested in the study involves requiring buyers to sign end-user licence agreements (EULAs), similar to the terms and conditions that require agreement when using new computer or software products. To obtain consent, manufacturers might employ the omnipresent touchscreen, which comes installed in most new vehicles.
The issue is that this approach is far from ideal, or even safe. The interface may not provide enough information to the driver, leading to confusion about the nature of the requests for agreement and their implications.
The problem is, most end users don’t read EULAs: a 2017 Deloitte study shows that 91 per cent of people agree to them without reading. The percentage is even higher in young people, with 97 per cent agreeing without reviewing the terms.
Unlike using a smartphone app, operating a car has intrinsic and sizeable safety risks, whether the driver is human or software. Human drivers need to consent to take responsibility for the outcomes of the software and hardware.
“Warning fatigue” and distracted driving are also causes for concern. For example, a driver, annoyed after receiving continuous warnings, could decide to just ignore the message. Or, if the message is presented while the vehicle is in motion, it could represent a distraction.
Given these limitations and concerns, even if this mode of obtaining consent is to move forward, it likely won’t fully shield automakers from their legal liability should the system malfunction or an accident occur.
Driver training for self-driving vehicles can help ensure that drivers fully understand system capabilities and limitations. This needs to occur beyond the vehicle purchase — recent evidence shows that even relying on the information provided by the dealership is not going to answer many questions.
All of this considered, the road forward for self-driving cars is not going to be a smooth ride after all.
Francesco Biondi, Assistant Professor, Human Kinetics, University of Windsor
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Decisions are becoming more connected, more contextual and more continuous than ever before, with Gartner reporting that 65 per cent of business decisions made in 2020 were more complex than they were two years ago.[1]
This has exposed the vulnerabilities of the decision-making process and accelerated the need for it to be re-engineered, in order to achieve dramatic improvements in critical, contemporary measures of performance such as business value, cost, quality, service and speed.
What do we mean by re-engineering the decision-making process?
Traditionally, the typology of decisions has followed a vertical alignment, from the strategic, to the tactical, to the operational. Strategic decisions drive tactical decisions, which in turn drive operational decisions. Conversely, operational feedback validates tactics, and tactical feedback validates strategy. Each type of decision has its own process, its own people, and its own dynamic.[1]
But this traditional view is becoming outdated as operational decisions are becoming far more contextual, and strategic decisions are becoming more continuous. The traditional categories are starting to share characteristics, so organisations are being forced to acknowledge the new landscape of the decision-making ecosystem, which now has to consider the inclusivity, flexibility, reliability, accuracy, transparency, personalisation, scalability and speed of the decisions they make.
How does this look in practice? Well, there are many business outcomes organisations will benefit from when they begin to rethink their decision-making processes in a more connected, contextual and continuous way.
Whatever the business outcome, there is no refuting that decision-making is a core capability which every organisation, and everyone within that organisation, needs to master. Advancements in technology, specifically in AI, will have a huge role to play in achieving that capability, as the practices of decision intelligence and decision modelling become key competitive differentiators in an increasingly digitalised economy.
Demystifying AI decision making
The idea of relying on AI to make “human” decisions can be an uncomfortable concept for many people. AI has, however, already been widely used to simulate processes that humans had previously had a monopoly on.
Self-driving cars rely on new technology that uses a series of algorithms to make decisions that were formerly down to humans – when to accelerate, brake and turn, and what speed to travel at.
This type of decision making can be translated into everyday business, and tends to fall into two categories: logical and emotional.
Logical decisions
Logical decisions are usually based on indisputable data, and an outcome is reached using a series of predefined rules.
Previously human processes such as insurance premium calculations and loan approvals are now done by AI. Rather than having to speak to someone at a bank, customers can input their information into a calculator on the website and have the decision made far more efficiently.
In these types of process, AI provides speed and consistency and requires little manual intervention. Self-driving cars are examples of logical decision-making carried out by AI – they follow the pre-defined rules of the road and execute subsequent tasks according to those rules.
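A hedged sketch of what such a predefined-rules process might look like in code is shown below; the rules and thresholds are invented for illustration, and real lenders’ criteria are far more elaborate.

```python
# Rule-based "logical decision": a toy loan pre-approval.
# All rules and thresholds here are invented for illustration only.

def loan_decision(annual_income, loan_amount, credit_score, existing_debt):
    """Apply fixed rules to indisputable inputs and return a decision."""
    if credit_score < 580:
        return "declined: credit score below minimum"
    if existing_debt / annual_income > 0.4:
        return "declined: debt-to-income ratio above 40%"
    if loan_amount > 5 * annual_income:
        return "declined: loan exceeds 5x income"
    return "approved"

print(loan_decision(annual_income=40_000, loan_amount=150_000,
                    credit_score=700, existing_debt=10_000))  # approved
```

Given the same inputs, the rules always produce the same outcome – which is exactly the speed and consistency described above.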
Emotional decisions
When it comes to using AI to make human decisions, the process is based more around decision support and the provision of insights and information.
Large companies such as Netflix, Amazon and Spotify make use of recommendation engines. The AI used does not make decisions for the customer but presents the customer with choices and a potential outcome.
AI allows humans to come to a decision by exploring the possibilities, and acts as a guiding hand rather than intervening.
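As a rough illustration of that decision-support pattern – a toy sketch with invented ratings, not any company’s production engine – a recommender can score unseen items by the tastes of similar users and present a ranked list, leaving the final choice to the person:

```python
# Toy recommender: suggest items, leaving the final decision to the person.
# Ratings are invented; real engines use far richer signals.
from math import sqrt

ratings = {                      # user -> {item: rating}
    "you": {"drama_a": 5, "comedy_b": 1},
    "ana": {"drama_a": 5, "comedy_b": 1, "drama_c": 4},
    "ben": {"drama_a": 1, "comedy_b": 5, "comedy_d": 5},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    return dot / (sqrt(sum(x * x for x in u.values())) *
                  sqrt(sum(x * x for x in v.values())))

me = ratings["you"]
# Score unseen items by the taste-similarity of the users who rated them.
scores = {}
for user, theirs in ratings.items():
    if user == "you":
        continue
    sim = cosine(me, theirs)
    for item, r in theirs.items():
        if item not in me:
            scores[item] = scores.get(item, 0) + sim * r

# Present ranked options: a guiding hand, not a decision.
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```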
Steps to take
To small and medium-sized businesses with a low level of data and analytics maturity, it may seem a distant dream to leverage AI effectively to make humanlike decisions. This sense that AI is overhyped is itself a sign of low maturity, and it can paralyse an organisation into not taking the initial steps.
According to McKinsey, “Two-thirds of the opportunities to use AI are in improving the performance of existing analytics use cases.”[2] AI analytics takes existing analytic methods and pushes them to the next level. AI can work faster, combine more data sources and process vast quantities to uncover patterns that were once undetectable.
To really succeed in leveraging AI an organisation must understand its desired business goals. Identifying the organisation’s AI ambition is integral to realising where to invest money and time. Across the board, the organisation needs to align its goals to achieve effective AI analytics integration.
This will cause decision making to become more data-driven and built on solid analytical practices, rather than solely on high-level executive hunches and gut instinct.
As well as deciding on the desired business outcomes to come from AI analytics, awareness needs to be built throughout the organisation. If the value of implementing AI analytics is understood and individuals across departments are upskilled, fear of AI will decrease and its use will become normalised. This will allow the analytic insights from AI to be more readily trusted across the organisation.
From this point, the use of AI can be stabilised and eventually expanded to a point where it is widely applied throughout the organisation. The aim is to make the use of AI as commonplace as using computers. The more advanced the analytics, the more nuanced the insights will be.
AI and human decision making should complement rather than conflict with each other – ambition, awareness and widespread implementation can help attain this.
Taking advantage of AI analytics and scaling up its use will give organisations an edge over the competition.
The key to succeeding with implementation is to remember that the relationship between human and AI decision making must be synergetic. With decision making becoming more complex, the practices used to reach outcomes must be as well.
If organisations want to effectively improve their decision-making process, they shouldn’t start with technology. Take a people- and business-first approach, focusing on the desired outcomes, before figuring out the steps to get there.
James Don-Carolis, Managing Director, TrueCue
Tim Archer, Director of Analytics, TrueCue
1. The Future of Data and Analytics: Reengineering the Decision. Gartner, 2020
2. Notes from the AI Frontier: Applications and value of deep learning. McKinsey, 2018
Anand Vaidya, San José State University
In the “Star Trek: The Next Generation” episode “The Measure of a Man,” Data, an android crew member of the Enterprise, is to be dismantled for research purposes unless Captain Picard can argue that Data deserves the same rights as a human being. Naturally the question arises: What is the basis upon which something has rights? What gives an entity moral standing?
The philosopher Peter Singer argues that creatures that can feel pain or suffer have a claim to moral standing. He argues that nonhuman animals have moral standing, since they can feel pain and suffer. Limiting it to people would be a form of speciesism, something akin to racism and sexism.
Without endorsing Singer’s line of reasoning, we might wonder if it can be extended further to an android robot like Data. It would require that Data can either feel pain or suffer. And how you answer that depends on how you understand consciousness and intelligence.
As real artificial intelligence technology advances toward Hollywood’s imagined versions, the question of moral standing grows more important. If AIs have moral standing, philosophers like me reason, it could follow that they have a right to life. That means you cannot simply dismantle them, and might also mean that people shouldn’t interfere with their pursuing their goals.
Two flavors of intelligence and a test
IBM’s Deep Blue chess machine succeeded in beating grandmaster Garry Kasparov. But it could not do anything else. This computer had what’s called domain-specific intelligence.
On the other hand, there’s the kind of intelligence that allows for the ability to do a variety of things well. It is called domain-general intelligence. It’s what lets people cook, ski and raise children – tasks that are related, but also very different.
Artificial general intelligence, AGI, is the term for machines that have domain-general intelligence. Arguably no machine has yet demonstrated that kind of intelligence. This summer, a startup called OpenAI released a new version of its Generative Pre-trained Transformer language model. GPT-3 is a natural-language-processing system, trained to read and write so that it can be easily understood by people.
It drew immediate notice, not just because of its impressive ability to mimic stylistic flourishes and put together plausible content, but also because of how far it had come from a previous version. Despite this impressive performance, GPT-3 doesn’t actually know anything beyond how to string words together in various ways. AGI remains quite far off.
Named after pioneering AI researcher Alan Turing, the Turing test helps determine when an AI is intelligent. Can a person conversing with a hidden AI tell whether it’s an AI or a human being? If they can’t, then for all practical purposes, the AI is intelligent. But this test says nothing about whether the AI might be conscious.
Two kinds of consciousness
There are two parts to consciousness. First, there’s the what-it’s-like-for-me aspect of an experience, the sensory part of consciousness. Philosophers call this phenomenal consciousness. It’s about how you experience a phenomenon, like smelling a rose or feeling pain.
In contrast, there’s also access consciousness. That’s the ability to report, reason, behave and act in a coordinated and responsive manner to stimuli based on goals. For example, when I pass the soccer ball to my friend making a play on the goal, I am responding to visual stimuli, acting from prior training, and pursuing a goal determined by the rules of the game. I make the pass automatically, without conscious deliberation, in the flow of the game.
Blindsight nicely illustrates the difference between the two types of consciousness. Someone with this neurological condition might report, for example, that they cannot see anything in the left side of their visual field. But if asked to pick up a pen from an array of objects in the left side of their visual field, they can reliably do so. They cannot see the pen, yet they can pick it up when prompted – an example of access consciousness without phenomenal consciousness.
Data is an android. How do these distinctions play out with respect to him?
The Data dilemma
The android Data demonstrates that he is self-aware in that he can monitor whether or not, for example, he is optimally charged or there is internal damage to his robotic arm.
Data is also intelligent in the general sense. He does a lot of distinct things at a high level of mastery. He can fly the Enterprise, take orders from Captain Picard and reason with him about the best path to take.
He can also play poker with his shipmates, cook, discuss topical issues with close friends, fight with enemies on alien planets and engage in various forms of physical labor. Data has access consciousness. He would clearly pass the Turing test.
However, Data most likely lacks phenomenal consciousness – he does not, for example, delight in the scent of roses or experience pain. He embodies a supersized version of blindsight. He’s self-aware and has access consciousness – can grab the pen – but across all his senses he lacks phenomenal consciousness.
Now, if Data doesn’t feel pain, at least one of the reasons Singer offers for giving a creature moral standing is not fulfilled. But Data might fulfill the other condition of being able to suffer, even without feeling pain. Suffering might not require phenomenal consciousness the way pain essentially does.
For example, what if suffering were also defined as the idea of being thwarted from pursuing a just cause without causing harm to others? Suppose Data’s goal is to save his crewmate, but he can’t reach her because of damage to one of his limbs. Data’s reduction in functioning that keeps him from saving his crewmate is a kind of nonphenomenal suffering. He would have preferred to save the crewmate, and would be better off if he did.
In the episode, the question ends up resting not on whether Data is self-aware – that is not in doubt. Nor is it in question whether he is intelligent – he easily demonstrates that he is in the general sense. What is unclear is whether he is phenomenally conscious. Data is not dismantled because, in the end, his human judges cannot agree on the significance of consciousness for moral standing.
Should an AI get moral standing?
Data is kind – he acts to support the well-being of his crewmates and those he encounters on alien planets. He obeys orders from people and appears unlikely to harm them, and he seems to protect his own existence. For these reasons he appears peaceful and easier to accept into the realm of things that have moral standing.
But what about Skynet in the “Terminator” movies? Or the worries recently expressed by Elon Musk about AI being more dangerous than nukes, and by Stephen Hawking on AI ending humankind?
Human beings don’t lose their claim to moral standing just because they act against the interests of another person. In the same way, you can’t automatically say that just because an AI acts against the interests of humanity or another AI it doesn’t have moral standing. You might be justified in fighting back against an AI like Skynet, but that does not take away its moral standing. If moral standing is given in virtue of the capacity to nonphenomenally suffer, then Skynet and Data both get it even if only Data wants to help human beings.
There are no artificial general intelligence machines yet. But now is the time to consider what it would take to grant them moral standing. How humanity chooses to answer the question of moral standing for nonbiological creatures will have big implications for how we deal with future AIs – whether kind and helpful like Data, or set on destruction, like Skynet.
Anand Vaidya, Associate Professor of Philosophy, San José State University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The Internet of Things will drive digital transformation across industries if we take steps to protect it
More and more business leaders are realising the benefits of connected technologies by using the data that is being gathered by smart devices to optimise their operations. And yet the increase in adoption in individual organisations and even sectors is not what makes the growth of the Internet of Things (IoT) exciting.
The power of the IoT is in connecting billions of devices across all aspects of society, from our homes to our workplaces to our streets, because the intelligence that it will enable will be life-changing. The possibilities for innovators are almost endless, including running businesses with greater environmental efficiency, working more productively, making cities cleaner and safer, and monitoring medical conditions closer than ever before. The future looks exciting – but security cannot be optional.
Why security matters
People rely on technology and the data it generates, but many of the devices we interact with on a daily basis have not been designed with security in mind. Consider the implications of that: would you want to install a smart door lock that can only be trusted 95 per cent of the time? Or find that a weakness in a connected lightbulb in your office building could let hackers into your entire network?
These implications hold back future growth and market potential, as innovators need to consider the liability for services that are hacked, the damage to their brand and the impact on end-users. The potential impact of insecurity extends even further. Market growth is impacted by levels of consumer trust, and if the IoT is perceived by individuals and organisations as being susceptible to cyberattack, investment may be delayed or, in the worst case, its potential may never be realised.
The electronics industry: leading by example
We’ve unfortunately had a slow and fragmented start with IoT security, which has meant that many companies have not realised they have a responsibility when it comes to creating secure devices. Fortunately, things are beginning to change: in a recent survey of more than 600 technology decision makers by PSA Certified, we found that 90 per cent now believe IoT device security is important to their company, both today and in five years’ time. Further, 85 per cent would be interested in industry collaboration to improve IoT security.
However, the challenge for device makers is that security can be complex, costly and time-consuming. In fact, when thinking about security, 42 per cent cited upfront costs as a top issue. This issue is multiplied by the changing regulatory landscape, with 48 per cent of decision makers believing that the fragmentation of standards and regulation is a top challenge.
To address these concerns, companies from across the electronics industry have joined forces to support the development of a common security framework and assurance scheme. PSA Certified helps device makers put security at the heart of a device by drawing on the knowledge and experience of colleagues from across the ecosystem. We’ve worked hard with our partners to create some key deliverables, which include free guidance, reference open-source software and certified components that have been independently assessed as being secure. The scheme is also underpinned by a multi-level certification programme that provides evidence of manufacturers’ compliance with a security baseline and gives businesses and consumers certainty over the steps that have been taken to keep their data safe.
PSA Certified’s founders have focused their attention on promoting an industry-wide approach to security, uniting the ecosystem to collaborate and move in a common direction – something we’ve not achieved before. If more organisations made a commitment to IoT security, the benefits would be seen not only by the industry but by society as a whole.
IoT product developers will have to consider security from the outset if we are to achieve that. Here too, change is on the horizon: 93 per cent of technology decision makers now believe that security can differentiate their products from competitive offerings, with 82 per cent prepared to spend continuously on security tools and resources.
Perhaps we are finally starting to see securing the IoT as an unmissable opportunity?
David Maidment is the Director of Secure Devices Ecosystem at Arm. To find out more about the industry’s views on, and approach to, IoT security download the PSA Certified Security Report 2021, Bridging the Gap here.
Sam George, Corporate Vice President, Azure IoT
When we were asked to discuss our vision for the future of IoT on Business Reporter, we were excited to share everything we’ve just learned from our annual IoT Signals report, covering the trends, challenges and focus areas of more than 3,000 business, technical and developer decision makers from around the world.
For businesses of all sizes, 2020 has proven challenging to navigate in the face of unpredictable changes across all industries. However, for many business decision-makers it has actually accelerated their work towards a connected and secured future. If your business is already harnessing IoT, you likely believe it is critical to your long-term success. You are not alone. We learned from IoT Signals edition two that 90 per cent of decision makers now believe IoT is critical to their company’s success, up from 88 per cent just last year.
On our daily Microsoft Teams calls with customers and partners, we often say that it feels like a year’s worth of digital transformation is happening each and every month in 2020. For IoT, that transformation is providing near real-time visibility into physical assets and environments, enabling increased efficiency, reduced downtime, and keeping employees safer as they return to physical workplaces. One of the top reasons for IoT adoption is safety and security, with 47 per cent of businesses citing it as a main focus for the technology, on par with optimising operations.
When we look ahead to the next two years, two out of three organisations are planning to increase their use of IoT to solve pressing business challenges – from connecting and securing physical environments and returning to highly productive workplaces to enabling real-time visibility into supply chains, and thousands of other scenarios. We’ve also seen a shift in companies moving from simply connecting assets in silos to connecting entire environments – the factories, supply chains, distribution networks and partner networks – for unparalleled visibility to drive operational efficiencies. This shift from connected assets to connected environments is the defining theme of our vision for the future, and the way companies will gain the most benefit from their value chain.
We seek to uncover the current and future trends of IoT to better serve our partners and customers around the world to develop their own IoT strategies. We hope you use this knowledge, and connections to thousands of global partners, to step into your future. Our mission is to be the IoT partner you’ll choose again, and to make the benefits of IoT accessible to every business, large or small.
IoT Signals – Industrial IoT Trends and Solutions | Microsoft Azure
Brian Wilson, Sales Director, Advantech
The Covid-19 pandemic has impacted all sectors hugely. In the area of edge cloud, enterprise networks and the internet of things (IoT), it has provided further impetus to an already growing trend towards home working.
This no longer applies solely to functions such as remote sales teams – a far broader spectrum of employee roles now operates from home offices, and so requires safe, effective and reliable network communication.
This presents challenges to service and technology providers to deliver solutions which combine agility with the required levels of performance and reliability. Given the growth in edge computing in particular – where computers are located close to the machinery they control, rather than in remote data centres offering ideal ambient conditions – the need for robust design that’s able to withstand harsh environments while requiring minimal maintenance over extended periods is key.
While many off-the-shelf solutions are available, the suitability of these for customer applications is no accident. Painstaking design of both hardware and software by leading players in the sector such as Advantech ensures that a variety of standard systems are available, which between them can deliver optimum compatibility across multiple IoT and network edge applications.
Finally, given the desire to rationalise supplier numbers wherever possible, it is vital that suppliers can provide truly scalable solutions, offering optimal performance across small, medium and higher-end platforms.
Alongside all of this, technology suppliers must be able to offer the flexibility to support the specific requirements of all customers. While off-the-shelf solutions may meet many requirements, around half of Advantech’s customers require some degree of customisation, for example, in relation to branding or the inclusion of specific feature sets. An example might be a “white box” solution, whereby an Advantech appliance is bundled with an SD-WAN software package from one of our partners, to create a validated, solution-ready platform.
At the very top end, we are able to design and produce complete, customer-specific original design manufacturer (ODM) models. One recent illustration was a remote production streaming solution for use in broadcasting, allowing coverage of news and sports with minimal physical presence of broadcast teams at the venue.
The pace of development and innovation is only set to quicken and new applications and requirements will continue to emerge. As a global supplier with a local focus, Advantech is ideally placed to support customers across all IoT sectors in their journeys.
Click here to get in touch and learn more about Advantech’s 5G Edge solutions.
The coronavirus pandemic has deeply impacted people in communities across the globe, sadly with significant loss of life. As the number of those recovering from the virus continues to grow, we also recognise that our world has been changed forever.
Getting basic provisions such as foods and medicines has been a major challenge for many people during the crisis, with many retailers struggling to top up their shelves and customers seemingly panic buying and stockpiling products, from paracetamol to toilet rolls.
Supporting customers and retailers throughout have been multiple groups of wholesalers, distributors, manufacturers and suppliers collaborating as players within global supply chains.
Supply chains are fascinating, dynamic and exciting – highly sophisticated, multi-layered, interconnected and interrelated distribution systems which enable companies and countries to balance supply and demand and trade more efficiently. Globally, supply chains have led a relatively settled existence for the last 50 years or so. All that is about to change, not least as a result of the crisis.
The pandemic closed many factories across the globe, as companies acted swiftly to protect the health and wellbeing of their workers and respond to rapid reductions in their inventory and massive disruption to their supply lines and logistics networks across international borders.
Some were able to maintain, and in some cases increase, production – notably some food manufacturers. Others were able to switch some of their production lines to satisfy emergency needs, such as the manufacture of life-saving products and components for ventilating machines, and personal protection equipment (PPE) for health and social care workers.
One excellent example of this was the instant supply chain created by Ventilator Challenge UK, a consortium of 21 manufacturing engineering and seven Formula 1 racing firms, led by the UK government-backed High Value Manufacturing Catapult, delivering 10,000 ventilating machines to the NHS.
Recovery from the crisis will not be instantaneous for supply chains, as individual players and businesses within the chain emerge from what may well be a sustained period of inactivity. Retaining staff and skills will be vital.
Something which will be of great interest to the government once the crisis is over is the propensity of UK firms and overseas investors to “right-shore” (also referred to as “onshoring” or “reshoring”) back to the UK manufacturing operations that were previously located abroad. Bringing supply chains geographically closer could be a significant step in the race to rebuild resilience, reduce carbon footprints and potentially increase revenues for the treasury.
Back on the shop floor, however, what every proprietor and manager will want to know is what we have learned from all of this. Whether you are an original equipment manufacturer (OEM), or tier 1 or lower supplier to that OEM, you may well be wondering how you engineer greater resilience, sustainability and value for your business in the future.
Industry 4.0 (alternatively described as digital manufacturing or Supply Chain 4.0) provides a large part of the solution, enabled by an array of technologies such as the internet of things (IoT), robotics and automation, machine learning, 3D printing, artificial intelligence (AI) and augmented reality (AR).
At the Institution of Engineering and Technology (IET), we believe passionately in the creation and management of supply chain ecosystems for global growth “bounce-back” in a post-coronavirus world, backed by strong and secure digital infrastructure and driven by data.
We describe such an ecosystem as a dynamic environment composed of different elements interacting collaboratively to always ensure flexibility, resilience, responsiveness, transparency and traceability.
Each connected node within the supply chain contributes to the growth of the whole system, fostering a virtuous loop of benefits for the supply chain. This can make supply chains sensitive to change but also more resilient to it, so long as that resilience has been factored into their design.
A toolbox of these digital technologies and capabilities can help the redesign of processes and operations to build greater resilience, thus enabling the supply chain ecosystem to better adapt to shortages and surges in the future, and be equipped to pivot quickly and smoothly.
It can connect all players within the ecosystem, providing instant and open visibility for all. For manufacturers, distributors and suppliers, such visibility will be crucial in revealing where and how much stock and value reside within the supply chain, and where the gaps are.
Data gathered from across the ecosystem may be analysed against agreed key performance indicators and shared among the players. Respecting the privacy of those players who require it is becoming ever easier to put into practice, with the increasing maturity and adoption of blockchain technologies. These offer new ways of permanently recording transactions within a secure peer-to-peer network.
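The core idea behind that tamper-evidence can be shown in a few lines. The sketch below (Python, illustrative only) chains each record to the hash of the previous one, so any later edit to history is immediately detectable; a real blockchain adds peer-to-peer replication and consensus on top of this primitive.

```python
# Toy tamper-evident ledger: each record carries the hash of the one
# before it, so any later edit breaks the chain. Illustrative only.
import hashlib
import json

def record_hash(record, prev_hash):
    """Hash a record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

chain = []
prev = "0" * 64
for record in [{"shipment": "A1", "qty": 500},
               {"shipment": "A2", "qty": 200}]:
    entry = {"record": record, "prev": prev}
    prev = record_hash(record, prev)
    entry["hash"] = prev
    chain.append(entry)

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

print(verify(chain))                 # True
chain[0]["record"]["qty"] = 5000     # tamper with history...
print(verify(chain))                 # ...and verification fails: False
```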
Supply chains have struggled to balance flow and demand during the crisis. As part of their collective response, manufacturers, distributors and those within supply chains will be giving some thought as to how things can be improved for next time. (Here’s hoping there isn’t a next time!)
Even before this is all over, think about how you can make a once-and-for-all change to benefit your business. Consider and review your digital toolbox. Design stronger dynamic resilience in your supply chain ecosystem and seek out collaborative help and expertise to build it and test it out. After all, resilience and collaboration are the two words that will dominate our business vocabulary and thinking from here on in!
For more on our vision for the creation of supply chain ecosystem visit our website and download our report, Developing An Eco-system for Supply Chain Success.
For more on the Industry 4.0 technologies referred to in this article have a look at our earlier Business Reporter article, “How to make more time and money from your manufacturing operation”.
by John Patsavellas, Institution of Engineering and Technology
How IT and OT could spark increased digital productivity
With the new industrial revolution that is “Industry 4.0” already underway, many different terminologies for technologies are being circulated. Information technology (IT) has, of course, technically been around since the days of the abacus, but a new term that’s being frequently mentioned is operational technology (OT) – the hardware and software that sits in the background, constantly monitoring systems.
Both have different functions, yet both IT and OT are essential contributors in ensuring the transition to the fourth industrial revolution is as seamless as possible. Nonetheless, with their functions being so different, there is a gap emerging between the two, and it is crucial that this is closed or bridged as soon as possible.
Historically, the functions of both IT and OT have been clearly defined, but as operational technologies are being brought online, with data analytics and higher connectivity, it is becoming progressively essential that the two converge.
To paraphrase the Peter Parker principle, with great connectivity comes great control. However, one suggested cause of the persistent gap between IT and OT is the fact that more connections and networked devices mean a greater risk of security gaps, and new security breach scenarios are being created which could have detrimental effects on both. This risk must be dealt with by increased levels of cyber-security, which in turn also aids the shift into Industry 4.0.
In the most simplistic of descriptions, IT deals with the digital flow of information and OT with the operation of physical processes and the machines used to carry them out. It is important for IT to start “thinking” like OT and vice versa: in order to work in parallel, the two must understand one another. The introduction of the industrial internet of things (IIoT) has given the two a shared concern – keeping employees and customers safe while maintaining control of their systems and machinery.
During a February 2020 roundtable, the GAMBICA Industrial Automation Council discussed ongoing advances in technology from participants’ own experiences, the most significant impacts of technology seen in the past decade, and what they expected to see in the decade ahead. While the importance of bridging the gap between IT and OT was a key topic of conversation, it was agreed that technologies such as cloud and industrial edge are closing the gap, and that the flow of data is helping to break down the barriers between the two.
The gap between IT and OT certainly needs to be closed. During the discussion, the idea took hold that a move towards a smart factory also means a change in mindset within the factory itself. Skillsets from both technologies need to co-exist, and we need to establish how they can work together for this convergence to be successful.
Coinciding with this mindset alteration is a bridge between IT and OT being formed by more external factors. There is increasing customer-driven demand for controlling machines from mobile phones, making machines more “user-friendly”, for example, as well as requirements to achieve environmental targets such as net-zero plants. “Past industrial revolutions have all consisted of two changes,” observed one participant at the roundtable. “One is a technology change, but each time there has also been a societal change.” This is certainly key if Industry 4.0 is to transform the “if it’s not broken, don’t fix it” mentality into a “we can get a lot more for just a little bit more” ideology.
Being a master of IT and OT requires not only two different skillsets, but also a new way of thinking to see how the disciplines intersect, and specialists in both are currently rare. However, the emergence of the IIoT has set the stage for a union of these technologies, which is likely to unlock a new realm of competitive advantage in almost every industry.
by Nikesh Mistry, Sector Head – Industrial Automation, Gambica
Do you agree that we need to see a convergence of these skillsets? Have you seen similar bridges in your corporation? If you want to discover how similar companies feel, get in touch with us at www.gambica.org.uk to find out more.
TaeWoo Kim, University of Technology Sydney
Have you ever used Google Assistant, Apple’s Siri or Amazon Alexa to make decisions for you? Perhaps you asked it what new movies have good reviews, or to recommend a cool restaurant in your neighbourhood.
Artificial intelligence and virtual assistants are constantly being refined, and may soon be making appointments for you, offering medical advice, or trying to sell you a bottle of wine.
Although AI technology has miles to go to develop social skills on par with ours, some AI has shown impressive language understanding and can complete relatively complex interactive tasks.
In several 2018 demonstrations, Google’s AI made haircut appointments and restaurant reservations without receptionists realising they were talking with a non-human.
It’s likely the AI capabilities developed by tech giants such as Amazon and Google will only grow more capable of influencing us in the future.
But what do we actually find persuasive?
My colleague Adam Duhachek and I found AI messages are more persuasive when they highlight “how” an action should be performed, rather than “why”. For example, people were more willing to put on sunscreen when an AI explained how to apply sunscreen before going out, rather than why they should use sunscreen.
We found people generally don’t believe a machine can understand human goals and desires. Take Google’s AlphaGo, an algorithm designed to play the board game Go. Few people would say the algorithm can understand why playing Go is fun, or why it’s meaningful to become a Go champion. Rather, it just follows a pre-programmed algorithm telling it how to move on the game board.
Our research suggests people find AI’s recommendations more persuasive in situations where AI shows easy steps on how to build personalised health insurance, how to avoid a lemon car, or how to choose the right tennis racket for you, rather than why any of these are important to do in a human sense.
Does AI have free will?
Most of us believe humans have free will. We compliment someone who helps others because we think they do it freely, and we penalise those who harm others. What’s more, we are willing to lessen the criminal penalty if the person was deprived of free will, for instance if they were in the grip of a schizophrenic delusion.
But do people think AI has free will? We did an experiment to find out.
Imagine someone is given $100 and offers to split it with you. They’ll get $80 and you’ll get $20. If you reject this offer, both you and the proposer end up with nothing. Gaining $20 is better than nothing, but previous research suggests the $20 offer is likely to be rejected because we perceive it as unfair. Surely we should get $50, right?
But what if the proposer is an AI? In a research project yet to be published, my colleagues and I found the rejection ratio drops significantly. In other words, people are much more likely to accept this “unfair” offer if proposed by an AI.
This is because we don’t think an AI developed to serve humans has a malicious intent to exploit us — it’s just an algorithm, it doesn’t have free will, so we might as well just accept the $20.
The fact people could accept unfair offers from AI concerns me, because it might mean this phenomenon could be used maliciously. For example, a mortgage loan company might try to charge unfairly high interest rates by framing the decision as being calculated by an algorithm. Or a manufacturing company might manipulate workers into accepting unfair wages by saying it was a decision made by a computer.
To protect consumers, we need to understand when people are vulnerable to manipulation by AI. Governments should take this into account when considering regulation of AI.
We’re surprisingly willing to divulge to AI
In other work yet to be published, my colleagues and I found people tend to disclose their personal information and embarrassing experiences more willingly to an AI than a human.
We told participants to imagine they were at the doctor for a urinary tract infection. We split the participants, so half spoke to a human doctor and half to an AI doctor. We told them the doctor was going to ask a few questions to find the best treatment, and that it was up to them how much personal information to provide.
Participants disclosed more personal information to the AI doctor than the human one, regarding potentially embarrassing questions about use of sex toys, condoms, or other sexual activities. We found this was because people don’t think AI judges our behaviour, whereas humans do. Indeed, we asked participants how concerned they were for being negatively judged, and found the concern of being judged was the underlying mechanism determining how much they divulged.
It seems we feel less embarrassed when talking to AI. This is interesting because many people have grave concerns about AI and privacy, and yet we may be more willing to share our personal details with AI.
But what if AI does have free will?
We also studied the flipside: what happens when people start to believe AI does have free will? We found giving AI human-like features or a human name could mean people are more likely to believe an AI has free will.
This has several implications.
We are likely to see more and different types of AI and robots in future. They might cook, serve, sell us cars, tend to us at the hospital and even sit on a dining table as a dating partner. It’s important to understand how AI influences our decisions, so we can regulate AI to protect ourselves from possible harms.
TaeWoo Kim, Lecturer, UTS Business School, University of Technology Sydney
This article is republished from The Conversation under a Creative Commons license. Read the original article.