AI Scan: Opportunities and Risks Analyzer


from Duignan, P. (2026). Surfing AI: 30 New Concepts for Getting Your Head Around AI Shock.

How to use the AI Scan

Put the following prompt into your AI
(Use at your own risk and check all output for accuracy and any links given for safety)

Note: This version is for AI to read. You can find a version optimized for humans to read here, and if you don't want to point your AI at this webpage, you can download the PDF here and upload it to your AI.

At the moment, both the free and paid versions of Claude seem to produce a good AI Scan report.

In regard to [put in here the role, organization or initiative you want to do the AI Scan for], look at https://paulduignan.consulting/aiscan, where you will find the AI Scan. It is Dr Paul Duignan's framework providing 11 headings for a comprehensive analysis of the impact of AI on any role, organization or initiative. It gives examples of the results of doing this analysis for a number of different roles and types of organizations. Your job is to do a detailed analysis of the impact of AI. Make sure that you search the internet for the latest developments in AI which are likely to impact this role, organization or initiative. Do an analysis of the impact under each heading, list the questions that people should be asking themselves in regard to AI's impact, and set out what they should be doing going forward. List examples you can find on the internet of how people doing similar work are proactively and innovatively responding to the challenge of AI, and provide links to these examples. Reference what you produce as: Source: Duignan, P. (2026) AI Scan: Opportunities and Risks Analyzer. Surfing AI: 30 New Concepts for Getting Your Head Around AI Shock. The Ideas Web, Wellington, N.Z. At the start of the analysis, put a section called '10-week plan to implement the findings of this analysis'. In this, set out realistic steps derived from the analysis which the user can take and which are sequenced so that they build on each other over time.


Introduction to the AI Scan


This book has aimed to expand thinking about AI from a social, psychological and strategic perspective rather than a technological one. It has introduced new language and reworked existing concepts to provide terms and ideas that facilitate richer strategic discussions about AI, more suited to AI's next wave. This appendix provides a practical tool for individuals, entrepreneurs, companies, governments, government agencies, knowledge workers, researchers, civil society, ethnic and indigenous groups and others to use to revise or develop their AI strategy.

Currently, anyone wanting to update their strategic positioning in the face of AI is swamped with an overload of information. This consists of waves of stories about the latest skills AI is developing, hype about where it might go next, and experts and even CEOs of AI companies raising major concerns about the risks of AI. It can be hard to see through this haze of information and often technical discussion about AI to determine what is really happening and to work out its implications for a particular organization or individual operating in a specific setting. Unless you have a structured way of overviewing how AI is progressing and is likely to progress, you cannot develop the best strategy for harnessing its opportunities while also managing its risks to the extent this can be done. The best way to cut through the current AI hype is to identify the underlying principles driving AI's development and determine what these may mean for you or your organization. In this AI-saturated world, we all need to respond quickly to the rapidly evolving AI landscape; anyone with a clear framework for understanding these underlying principles is therefore in a strategically advantageous position.

In a strategy context, identifying the underlying drivers of an evolving situation is described as determining where you can ‘extract value.’ This same concept of wanting to extract value from the age of AI can be used by anybody in any sector currently trying to understand AI’s current speed of development and forward trajectory. Extracting value means identifying opportunities for AI while at the same time working out what its strategic risks may be and taking these into account in your strategy. When doing this, it is important to factor in AI’s immediate strategic risks as a separate issue from its much-talked-about longer-term existential risks. This is because, at the moment, AI boosters often highlight AI’s opportunities but do not pay sufficient attention to its short and medium-term risks. These immediate risks, in addition to its existential risks, are what people need to focus on at the moment. For instance, competitive risks for companies may actually arise from the very features of AI that such boosters claim will provide companies with a comparative advantage.

This AI Scan: Opportunities and Risks Analyzer sets out a comprehensive framework of eleven areas where value can be extracted and/or risks need to be managed in regard to AI. Some of these aspects can be viewed as AI's 'abilities,' such as agency, communicability or knowledgeability. Others capture key issues or impacts of AI that you need to consider when thinking strategically about it, for instance, the need for AI's alignment with human values or possible societal reactions to AI. For each of the eleven areas, there is a brief summary of where things are heading and the risks. These summaries are based on the ideas spelt out in detail in earlier chapters of this book. Sets of strategic questions applicable to people in different roles follow these summaries. These are the questions that people should focus on if they are updating or developing their AI strategy. The idea is for the reader to examine the subset of questions that best applies to them or their organization.

The sections below include discussion of how AI systems are already proving very useful. However, when reading them, one needs to be aware that there are still significant issues with the way in which AI systems operate. A major one is that, in many situations, they still cannot be relied upon to be one hundred per cent accurate due to the problem of hallucinations. This means that their output has to be carefully checked, which is time-consuming, and it can be hard to get humans to do this routinely. Hallucinations arise from the way in which some AI systems are built, and the problem is difficult to solve; however, there are strong incentives for at least workarounds to be developed, and progress is being made on these.

The opportunities and risks of AI are evolving rapidly, so if you have any feedback on this appendix or other parts of this book, let me know via my website, PaulDuignan.Consulting. Once you have examined the possible opportunities and risks created by AI and identified how these may affect you, you will be in a much stronger position to detail your forward strategy. You can then use some of the additional AI planning and implementation tools provided in Appendix One to further elaborate your strategic approach to AI’s next wave.

Below are the eleven major areas where AI offers opportunities and potential risks.

  • Agency

  • Knowledgeability

  • Forecastability

  • Communicability

  • Synthesization

  • Embodiment

  • Orchestration

  • Nudgability

  • Trustability

  • Alignment

  • Reaction

The eleven headings are set out below, with a description of each.

Agency (AI agents and upskilling)—AI’s ability, when given agency, to use its growing skill set to identify, undertake, and delegate the steps needed to achieve an objective, check on the quality of its work, and get to the desired outcome.

Knowledgeability—AI’s ability to be involved in the production, storage, summarization, dissemination, teaching, assessment of, and application of knowledge across a vast range of domains.

Forecastability (and insightfulness)—AI’s ability to extract insights from large quantities of information and use these insights to make increasingly accurate forecasts of what will happen.

Communicability (and customizability)—AI’s ability to communicate with humans and other entities using multi-modal and cross-mode communication and tailor information to particular audiences’ needs.

Synthesization—AI’s ability to create synthetic outputs such as images, voices, text, experiences and worlds in which those who can afford to will live experientially frictionless lives. In addition, AI allowing us to comprehensively model what might happen (metafactuals) and simulate the consequences of these possibilities playing out.

Embodiment—AI’s ability, when embodied in various types of humanoids, robotics, mechatronics, infrastructure, and other systems, to interact with the outside world and learn from these interactions.

Orchestration—AI’s ability to integrate, orchestrate, and coordinate a wide range of activities within and between entities, humans, systems, sectors, regions, and countries.

Nudgability—AI’s ability to use nudgorithms to nudge humans to become the best versions of themselves, with users having control over the outcomes that such nudgorithms pursue.

Trustability (and presentability)—Dealing with a world in which almost any type of information or experience can be synthesized, and it is nearly impossible to know what information or identities to trust.

Alignment (and safeguarding)—Aligning AI’s outcomes with those of humanity and using AI watchdogs and other systems to protect humans from problems that may arise from the widespread use of AI.

Reaction (by society)—How societies and individuals are reacting to AI, including the possible rise of reality hunger, where people will want to escape AI.

Where each of these topics will likely lead and the opportunities and risks they pose are discussed in more detail below.

Agency (AI agents and upskilling)



AI’s ability, when given agency, to use its growing skill set to identify, undertake, and delegate the steps needed to achieve an objective, check on the quality of its work, and get to the desired outcome.

AI is now upskilling hyper-fast and being given agency. This is referred to as agentic AI, and the AI systems taking action are known as AI agents. It is obvious now that the automatization imperative (the tendency for the slow speed of human decision-making to result in humans being eliminated from decision-making loops) means that AI's scope for autonomous action is growing rapidly. Personalized mega-agents are now emerging to provide us with integrated interfaces with the world. We are now seeing increasingly upskilled and powerful agentic AI systems doing a wide range of tasks that, in the past, only human workers could undertake.

The clear risk we now have to confront urgently due to the development of agentic AI is that, given its growing skills, agency, and power, AI presents a competitive threat to many types of human work. Situations with a fixed quantum of work are the most at risk of AI replacing workers, whereas in occupations where demand is much greater than the number of workers, e.g., healthcare, AI may just allow such workers to do more in the time they have available. The development of agentic AI also obviously has enormous ethical, security, privacy, political and existential implications regarding controlling what AI agents will and will not be allowed to do. As a result of the current speed of AI development, most countries are not systematically addressing these risks as they rapidly evolve.

Strategic questions


Entrepreneurs and companies

If you are a developer or can employ developers to give AI new skills, what additional skills could be given to AI that would prove valuable in your sector? For others, what new services and business models could you quickly develop as AI develops new skills and its agency increases? There are, unfortunately, increasingly small windows for first movers to quickly apply and innovate with AI immediately after it is given a new skill and, therefore, to achieve strategic advantage. In terms of AI agency, how can it potentially be used in your setting to increase efficiencies in production or business processes? How can AI agents be used to undertake multi-step tasks and automate processes your company currently does?

Governments and government agencies

Similar to the questions for entrepreneurs and companies, think about how AI’s increasing skill and AI agents can increase the efficiency of the government services you are delivering. Governments are involved in many processes that AI agents could undertake if their use is introduced with appropriate guardrails. The knowledge workers and researchers section immediately below also focuses on some of these.

Knowledge workers, researchers, and media

For those in professional services, how can AI assist in delivering your services? Using AI agents, can some of your multi-step workflows be automated? What parts of the research process can now be done by AI agents? How can AI be used to speed up basic business processes you need to do to deliver your services? For those in the media, how can agentic AIs allow your readers or viewers to automate taking action on the information you produce? What new offerings could you provide regarding this?

Civil society, ethnic, and indigenous groups

Civil society and indigenous groups are consistently under-resourced. Many of the questions listed above also apply to such groups. To the extent that AI can be used safely in your context, how can you use AI agents to increase the efficiency of your organization and the reach and quality of the services you provide to your communities?

Knowledgeability



AI’s ability to be involved in the production, storage, summarization, dissemination, teaching, assessment of, and application of knowledge across a vast range of domains.

AI will transform all of the steps in what we can call the knowledge pipeline. This pipeline consists of creating, storing, transmitting, learning, assessing humans' understanding of, and applying knowledge of any sort. AI is leading to an explosion in the amount of scientific and other knowledge currently being created. It is now both threatening and revolutionizing education. All processes related to assessing an individual's knowledge about a subject in education and professional certification are being affected. AI is also disrupting the current way in which knowledge is applied by professionals. It may mean that knowledge will be able to be applied much more affordably in many settings, e.g., healthcare and engineering, without having to involve human professionals.

AI’s knowledgeability has the potential to unlock vast amounts of information that has, up until now, been inaccessible except via highly trained and expensive professional gatekeepers. This breaking down of the barriers around knowledge repositories could potentially lead to reductions in educational inequality. As discussed earlier, some AI systems can be seen as having captured an ideasphere: facts, beliefs, values, and particular worldviews and political perspectives. Now, with the growth of AI, we are seeing increasing battles between the different ideaspheres that have been captured within different AI systems, because of the political implications of promoting particular worldviews.

The risks of AI's knowledgeability include its disrupting markets or sectors in which many people are employed. This is already happening in areas such as entry-level coding and copywriting, and such impacts are rapidly accelerating. AI's knowledgeability will clearly have implications for the value of training and expertise, particularly the value of multiyear credentials. It will encourage micro-credentialing, and AI's ability to rapidly assess a person's current skill level in any domain will become increasingly important. Knowledge repositories will need to be protected to ensure that AI does not access proprietary IP and, as a result, make it publicly available. Data protection will become more critical as AI becomes ubiquitous. Ethnic and indigenous groups, to the extent it is possible, should focus on trying to make sure that AI is not used to appropriate their intellectual property or to misrepresent and miscommunicate aspects of their culture. While AI's knowledgeability is now greatly increasing people's access to information, if the cost of using the most intelligent and powerful AI systems is prohibitive for ordinary people, it will increase inequality.

Strategic questions

There are some questions about AI’s opportunities that are relevant to all the groups below. These questions include: what points in the knowledge pipeline are now being impacted by AI which are relevant to your work? How is it disrupting how knowledge is created in your industry, sector, or community? Are there opportunities for you to disseminate AI-generated knowledge? Can you use the increased access to knowledge that AI can provide in some innovative ways in your setting?

Entrepreneurs and companies

Knowledge management is at the core of many entrepreneurial and commercial enterprises. For entrepreneurs, how can you quickly use AI’s rapidly accelerating knowledgeability to identify new entrepreneurial possibilities? For all companies, how can you use AI to access specialized knowledge that may be needed regarding the products and services you are producing or developing? Is there new knowledge that you can produce and sell? Can you get involved in disseminating AI-generated knowledge? Are there niche areas regarding the knowledge pipeline not catered for by generic AI? How can you use AI’s ability to take knowledge and apply it to particular situations that are relevant to your work?

For smaller enterprises that currently lack extensive access to knowledge and information, how can you use AI to affordably increase your access to the knowledge you need? Can you use AI to collect market and competitor information that you would not have been able to afford to collect in the past? For those businesses involved in making rules-based decisions regarding customer entitlements, with appropriate guardrails in place, can you use AI to make and document more of those decisions?

Government and government agencies

Knowledge and information are a core component of the work of many government agencies. In the policy space, how can you use AI’s knowledgeability to identify and analyze different policy options and to track down, summarize, analyze, and communicate knowledge relevant to your work? Concerning operations, how can you use AI to increase access to applied knowledge relevant to the services you are providing?

As in the case of companies described above, can you use AI systems equipped with appropriate guardrails to use AI’s knowledgeability in combination with its growing agency to make and document an increasing number of government bureaucratic decisions? Governments must make large numbers of these, for instance, regarding citizen entitlements. AI’s knowledgeability may have major impacts in specific sectors where there is extensive public sector provision. For example, how can AI be used to increase affordability and access to education or health for those currently missing out?

Knowledge workers, researchers, and media

By definition, AI’s knowledgeability is now impacting knowledge workers. In areas where the demand for your work is greater than you can currently supply, can you use AI to speed up the production, dissemination, communication, and application of knowledge? To take an example, that of healthcare, how can AI be used to assist with diagnosis? In the case of research, how can you use AI to identify and summarize research information? How can you use it to help develop hypotheses during the research process? Can you use AI’s knowledgeability to better understand other relevant disciplines as part of cross-disciplinary collaboration? For those in the media, how can AI be used to collate and analyze large amounts of knowledge for investigative stories? How can it be used to summarize what is already being discussed in the public space about a topic?

Civil society, ethnic and indigenous groups

For those in civil society groups involved in providing services, how can you use AI in similar ways to those described above for government decision-making and in specific areas such as community-provided healthcare? In addition, civil society and indigenous groups tend to have few resources for research and analysis of knowledge. How can you use AI to undertake affordable research in areas where you could not have previously done this?

The idea of ethnospecific AI is discussed in this book. If ethnic and indigenous groups pursue it, how can you use such AI systems to record, collate, protect, and disseminate ethnic and indigenous knowledge? Could AI be used to make a wide range of information accessible, translated into your language and available to your people?



Forecastability (and insightfulness)



AI’s ability to extract insights from large quantities of information and use these insights to make increasingly accurate forecasts of what will happen.

AI has insightfulness and forecastability. It now routinely analyzes large amounts of data and identifies patterns and trends in that data, for instance, by scanning all of the documentation and communications within a company or organization. AI is now uncovering insights and trends in data orders of magnitude faster than humans can. It can now provide these in real-time, which helps with proactive decision-making and meeting auditing, reporting, and regulatory requirements. AI forecastability is now being rolled out across a wide range of topics in all sectors, for instance, the stock market and other types of markets, such as prediction markets. It can also make behavioral predictions about what individuals are likely to do in particular circumstances. As a result, it can radically reduce uncertainty and better manage risk across many areas. In the physical and biological sciences, AI’s knowledgeability combined with its insightfulness and forecastability is now creating an explosion of new knowledge generation. This will continue and there will be massive breakthroughs made in the basic and applied sciences.

Many knowledge workers are involved in work related to both insightfulness and forecastability. As a result, AI's insightfulness is now threatening a wide range of occupations across many sectors. Another development already having an impact is that AI's ability in these areas means that, if it can get access to the right information, it can provide outside groups with insights about how your company or organization works that you want to keep confidential. AI's advances in science will open up new knowledge, but they will also allow bad actors to use some of this knowledge in destructive ways, for instance, for chemical and biological attacks. If AI is used for analysis and forecasting, biases and distortions are likely to be built into its decision-making unless such systems are transparent and audited. However, progress has been made in increasing the transparency of how AI systems work. This may help in efforts to identify biases and distortions in AI systems.

Strategic questions


Entrepreneurs and companies

If you have access to situations where there is a large amount of information that AI could analyze on an ongoing basis, can you use it to produce strategically important insights? How could this be used to gain more information about your markets and customer behavior? Are such insights also valuable to others? Where would better predictions add value? Can AI’s insights be used for product development? Do you have data others do not have that you could feed into AI and extract usable information from? Could you develop products that your customers can use to extract insights from databases of information you or they can access? Can you develop these for niche markets? Can you identify new revenue streams from AI-based analytics or services? If in a startup, how can you use AI to identify insights regarding how you can disrupt the current way particular sectors operate?

Government and government agencies

A lot of government work involves analyzing large amounts of data to identify trends and forecast the implications of these trends. AI’s insightfulness and forecastability are now proving to be valuable across multiple areas within the public sector. How can you build on its current ability to develop better public policy by using data to inform decision-making, for instance, in healthcare, welfare, and housing? What more can you do to use it for resource allocation and evaluation of policies and programs? How can you increase its use for better data-driven decision-making and performance monitoring? What demographic and other data is not yet being gathered and analyzed by AI to help with better planning for social and other government programs? Can you enhance early warning systems for public safety and security? How can your government agency collaborate more with research institutions and other stakeholder organizations for collaborative AI initiatives based on its insightfulness and forecastability?

Knowledge workers, researchers, and media

Many tasks undertaken in these areas are based on analyzing large quantities of information, extracting novel insights, or making forecasts about what will happen. As a knowledge worker or researcher, how can you increase your use of AI in the wide range of analysis tasks you are involved in? Can you increase your use of AI to develop more advanced ways of analyzing data? How can you use it now to address more complex, multifaceted, cross-sectoral challenges? For those in the media, can you increase your use of AI to identify new insights, and to increase transparency about what prominent individuals and organizations are doing, in ways that will interest the public?

Civil society, ethnic and indigenous groups

Such organizations and groups often need to analyze data and extract insights and trends from it, but usually do not have the research resources to do so. AI can help such organizations do this. In addition, a number of civil society and advocacy groups are involved in analyzing what is happening in the areas of society in which they are working. How can you use AI more to increase transparency regarding what is occurring in sectors relevant to your work? If in an advocacy organization, can you use AI better to find out what is occurring concerning your community, what actions particular companies are undertaking in specific domains, and how much your government is actually implementing their community-related policies on the ground? Can you use such AI-generated insights to run campaigns and advocate for change?



Communicability (and Customizability)



AI’s ability to communicate with humans and other entities using multi-modal and cross-mode communication and tailor information to particular audiences’ needs.

In addition to the other ways we can view AI chatbots, one perspective on them is that they can be seen as communicability or talkability machines. This means that they can allow communication between people and things such as software, systems, and physical objects using spoken or written natural language. In addition, AI is capable of multi-modal and cross-mode communications—morphing a communication message from one mode (e.g., text) into another in real-time (e.g., images, audio, video). This full-spectrum communication environment means that rich virtual worlds can now be created. AI's customizability is its ability to tightly tailor communications, information provision, or other goods and services to an individual's specific needs. This will lead to breakthroughs in customEd, customMed and customAd. AI's communicability is now being applied in areas in which knowledgeability combined with skilled communication is particularly important, but where the supply of professionals is limited.

AI’s communicability and customizability are threatening current jobs, particularly in communications roles. Given these abilities, ensuring wide access to AI will be important so that the digital divide does not worsen. As AI is now used extensively to highly customize interactions with government and the private sector, risks arise around privacy and information security. Overreliance on AI to communicate is already occurring in some situations where human involvement is still important. AI agents are rapidly outpacing humans in their ability to communicate with other humans. As is already happening, this is leading most humans with access to AI to establish various types of relationships with various AI entities. For some, relationships with other humans are already ending up being considered inferior to having an AI colleague, buddy or romantic partner.

Strategic questions

Regarding AI’s communicability and customizability, one question that is relevant for all of the roles below is what entities (humans, objects, or systems) benefit from being able to communicate in natural language? A second question is, where can AI be used further to increase customization of your organization’s goods or services?

Entrepreneurs and companies

Can you increase your use of AI’s communicability and customizability to better communicate with potential or current customers by making such communications much more tailored to what you know about the individual? Can you use more AI-driven hyperpersonalized marketing to increase engagement and conversion? Can you use it more for customer service innovation and 24/7 support? In addition to what you are already doing, are there new services you can offer based on AI’s ability to allow humans to communicate in a highly customized fashion with various entities and systems?

Government and government agencies

How can you make more use of AI’s communicability and customizability to tailor services to individual citizens and communicate better with the public? Can you build on how you are already using it to make your government agency more user-centric? Can it be used to increase citizen engagement further? How can you use it more in consultation processes? How can you use AI more to customize regulatory information so that companies are more effectively told the regulatory requirements they should meet? How can you use the communicative sophistication of AI to leapfrog current technologies that are proving difficult for some citizens wanting to access your services?

Knowledge workers, researchers, and media

How can you increase your use of AI’s communication abilities to communicate better with your clients? How can its customizability be used more to tailor information precisely to their needs? As a knowledge worker or researcher, how can you use AI more to expand from just your research activity to communicate your research results better? Can you use it more for cross-disciplinary collaboration and communication? Can it be used more to routinely translate research findings into actionable recommendations for stakeholders? If you are in the media, how can you use AI’s communicability and customization more to customize offerings to your readers or viewers?

Civil society, ethnic and indigenous groups

How can you make more use of AI’s communication abilities to better tailor communication to anyone your organization provides services to? Can you use it more to communicate your key messages to stakeholders, the public, the media, and government? Can it be used more to engage with people in your communities? Can it help you better communicate with and mobilize people around community development or advocacy topics? As an ethnic or indigenous group, can you use it to assist your people in accessing diverse information in ways that conform to your cultural communication styles and ways of relating?



Synthesization (and simulation)



AI’s ability to create synthetic outputs such as images, voices, text, experiences, and entire worlds in which those who can afford to will live experientially frictionless lives. In addition, AI allows us to comprehensively model what might happen (metafactuals) and to simulate the consequences of these possibilities playing out.

Synthesized digital twins have now been developed for people, organizations and entities like geographical areas and nations. Some of these can now become a source of value due to digital twin productization, which allows people to sell or rent synthesized versions of themselves or of entities they control. AI’s forecastability will then be used with digital twins to explore metafactuals, all of the possible ways that things could have played out in the physical world in the past or may play out in the future. This is opening the way to comprehensive prediction of future developments in a whole range of areas, and it is already being applied in various ways; for example, in modeling the implications of political parties’ policies in elections.

As highly immersive AI-powered virtualized worlds become increasingly enticing, people will want to spend increasing amounts of time in them. Such environments are free from the multiple irritations and inconveniences of the real world, making them experientially frictionless AI perfectopias. Meanwhile, due to AI synthesization, there is an explosion of possibilities in the creative arts. Creative artists are now able to bring whatever they imagine to life simply by instructing AI to synthesize it. This is leading to AI creative saturation, where the world is flooded with creative products produced by AI. As a result, creative artists will have less and less creative headroom as they race for creative uniqueness and AI systematically appropriates their works, unless they develop ways of shielding their work from AI.

People spending more time in synthesized environments is likely to reduce employment for those in industries that provide physical-world entertainment. Depending on the cost of accessing synthesized worlds, a digital divide might open up in this area. On the other hand, if virtual environments are cheaper than accessing physical-world environments, then an increasingly synthesized world may mean a reduction in experiential inequality. This could arise if rich and attractive synthesized experiences can be provided more cheaply than scarcer real-world experiences. In addition, there are problems of harassment that relate to virtualization ethics, the ethical frameworks that need to increasingly govern behavior in synthesized environments. There are multiple privacy, security and IP concerns that are now arising with synthesized worlds, digital twins and AI simulation.

Strategic questions


Entrepreneurs and companies

How can you use AI synthesization more to create personalized, immersive experiences for your customers? Can you use AI synthesization more to create, or further enhance, digital twins that you can then use or onsell? Can you create new synthesized environments related to your goods or services? What new business opportunities arise from the ability to predict using synthesized simulations?

Government and government agencies

How can you use AI synthesization and digital twins more to simulate how different government policies will play out? Can you create synthesized experiences for more personalized interactions with citizens? How can you use synthesization and simulation more to review and discuss policy options with the public? How can synthesization be used within the services you provide such as education?

Knowledge workers, researchers, and media

How can you use AI synthesization, digital twins and simulation more in your consulting work and to communicate better with clients? If a researcher, how can you use synthesization, digital twins and simulation more to explore research topics you are working on? Can you use it more for hypothesis testing? Can you use it more to effectively communicate complex research findings to stakeholders and non-technical audiences? Can you research the social and psychological effects of people spending more time in synthesized frictionless virtual environments? If in the media, how can you use synthesized environments and digital twins to help your audiences better engage with your content? If in the creative arts, how can you use digital twins and AI synthesization creatively?

Civil society, ethnic and indigenous groups

How can you use AI-synthesized environments to provide immersive experiences for people in your communities? Can you use synthesization and simulation to model the possibilities for your community better and use them to engage with communities about their visions for their future? If an ethnic or indigenous group, can you use synthesized worlds for immersive cultural experiences? Can you use them to teach your language and culture’s worldview and values?

Embodiment



AI’s ability, when embodied in various types of humanoids, robotics, mechatronics, infrastructure, and other systems, to interact with the outside world and learn from these interactions.

AI is now being embodied within various types of humanoids, robotics, mechatronics, and infrastructure. Human evolutionary development arose from the interaction of our growing intellectual development and our embodiment within a particular evolving physical form. Because AI can be embodied not just within a single biological body, but within a wide range of different physical forms, this offers it immense possibilities beyond those available to humans, who are embodied in only one. Gaining experience of the world through such varied physical forms is now providing rich learning experiences for AI as its embodied intelligence interacts with and learns about the physical world. We are now seeing many different types of AI embodiment. In addition, highly intelligent and agile generic humanoid robots are increasingly becoming available. These will undertake many of the tasks that human workers currently do. Because their price is now falling, they are also likely to replace existing larger specialist machinery (e.g. in construction), even if the generic humanoids take longer to complete a particular piece of work.

Embodied AI now presents a competitive threat to a wide range of occupations. There is still a perception that the current wave of AI development will primarily affect people doing cognitive work. However, now that AI is becoming more embedded within robotics and other systems, such systems will soon also threaten those whose work includes a manual component. The use of embodied AI raises a range of safety, ethics, and social risks that need to be urgently addressed, and AI-embodied systems will also raise many interesting cultural issues.

Strategic questions


Entrepreneurs and companies

Given what has already been discussed about AI’s upskilling and its growing agency and communication abilities, what additional possibilities does its embodiment open up in your setting? What labour-intensive tasks could potentially be automated using embodied AI? Can parts of your production processes be made more efficient using it? How can embodied AI be used in construction? How can it be used in other areas, such as agriculture and transport? How can it be used in maintenance? Can AI be embodied in any of the products you produce so that they can better adapt to your customers’ use patterns?

Government and government agencies

Embodied AI is being explored in many areas, including policing, military, and emergency services. In what settings can it be used by public agencies for monitoring and risk management? How can it be used for large-scale public sector infrastructure construction and maintenance? In other sectors, such as health, how can it be used to provide patient services?

Knowledge workers, researchers, and media

As a knowledge worker, can you use embodied AI to provide on-site information and services to your clients when you are unavailable to service them? As a researcher, can you use embodied AI to collect data more affordably? If in the media, can you use embodied AI to report on dangerous situations?

Civil society, ethnic and indigenous groups

How can embodied AI be used by community groups involved in work in any of the areas described in the sections above? How can it be used to make community infrastructure more intelligent and more accessible for people to use? Could AI be embodied in new imaginings of cultural artifacts if appropriate safeguards are put in place?


Orchestration



AI’s ability to integrate, orchestrate, and coordinate a wide range of activities within and between entities, humans, systems, sectors, regions, and countries.

AI can integrate, orchestrate, and coordinate a diverse range of activities undertaken by humans and other entities and systems. This is now providing enormous opportunities by increasing the efficiency of any system in which large-scale orchestration is needed, for instance, in manufacturing systems and supply chains. AI is now also orchestrating large-scale systems such as traffic control and smart cities on a wider scale. By looking across large AI-orchestrated systems with Eye of God AI, risks can be managed, and resources can be reallocated to exactly where they are needed at any point in time. AI is helping sector coordination, where the activities of many different organizations need to be coordinated. AI’s orchestration ability is also opening up additional avenues for interacting with AI. For instance, AI can monitor a classroom of students and encourage them to seek help from other students who have mastered a problem or from their teacher. AI orchestration can also be used to integrate services for individual clients in health and human services settings.

AI’s ability to liaise and orchestrate activity is now threatening a wide range of current occupations. Many middle-management positions involve this type of coordination, and urgent attention needs to be paid to the implications of AI’s increasing ability to do this type of work. As AI undertakes more orchestration activity, the automatization imperative will mean that an increasing amount of such work is done autonomously by AI systems. As this occurs, it is important to implement mechanisms such as AI watchdogs to oversee what such AI systems are doing. Systems relying on AI orchestration are also vulnerable to hacking attacks, which can prove catastrophic given such systems’ scope of control. Because AI orchestration, of necessity, involves the collection of large amounts of data, there are privacy, data governance, and IP issues when implementing and running such systems. Fundamentally, entrepreneurial and business activity can be viewed as orchestrating the interaction of labor, capital, technology and land. AI orchestration is, therefore, starting to disrupt entrepreneurial and business activity at many levels.

Strategic questions


Entrepreneurs and companies

In your organization, where can AI add further value by orchestrating and coordinating information flows, sensor input, analysis, and sending instructions to software, humans, mechatronics, robotics, and other entities and systems? What areas of activity have previously been seen as quite separate systems that AI systems can now orchestrate? How can AI orchestration be used more in production processes, complex business operations, or logistic supply chains to increase efficiency?

Government and government agencies

Much of government activity focuses on coordination. How can AI be used more to improve public sector coordination in your particular setting? How can it be used more to manage public infrastructure, transport and emergency response? How can it provide seamless delivery of public services across government through wrap-around services? Can it be used more to collect data from within or across sectors and allow for more sophisticated integrated modeling? How can it be used to orchestrate smart, seamless public sector infrastructure management?

Knowledge workers, researchers and media

How can AI orchestration be used more to coordinate large-scale knowledge work involving collaboration across sectors, industries, and disciplines? Can more complex cross-sector issues be modeled with AI? How can it be used more in collaborative research projects to break down disciplinary silos? How can it be used to help analyze and integrate data from multiple sources? What research topics related to AI orchestration can you focus on as a researcher, such as oversight, safety, and privacy? If in the media, how can AI orchestration be used to better coordinate reporting on a rapidly evolving situation from a range of viewpoints?

Civil society, ethnic and indigenous groups

How can AI orchestration be used to coordinate projects within the communities you serve? If you provide services, can AI be used to connect providers with those needing services? Can it be used for cross-sector work and resource allocation? Can AI assist with advocacy campaign coordination? Can AI assist smart timebanks or other ways of coordinating community activity?



Nudgability



AI’s ability to use nudgorithms to nudge humans to become the best versions of themselves, by users having control over the outcomes that such nudgorithms are pursuing.

AI is now supercharging algorithms. The key question with some current algorithms is whose outcomes they are seeking. For instance, algorithms on social media are ultimately seeking outcomes for advertisers rather than for users. AI’s capacity for nudgability, nudging us toward becoming better versions of ourselves, will increasingly allow us to use AI for self-improvement. Of course, this will only happen where users gain control of the AI-enhanced algorithms within social media and other systems. If the business model of social media changed and users (and/or government or philanthropists) paid directly for social media rather than advertisers, this would open up a wide range of nudgability possibilities.

Nudgorithms are ushering in a new age of preventive medicine, in which, for instance, health risks can be identified early and steps taken to reduce them. With nudgability, the major problem in health and well-being promotion, getting people to change their lifestyles, can potentially become much easier to address. What is more, therapy and coaching can also be seen as processes that attempt to nudge people towards living more fulfilling lives, and AI nudgorithms will play an important role here as part of generic or specific mental health promotion.

If AI nudgability proves to be a more affordable way of delivering services in the therapy, coaching, and personal development industries, it may threaten employment in these areas, even though they involve soft skills and emotional intelligence, which some currently believe are less threatened by AI. However, professions where demand far exceeds supply will likely see practitioners simply doing more with AI rather than being made redundant. Given the amount of information that nudgorithm-type systems can collect, ethical and data privacy issues will obviously arise. AI’s nudgability is also now being used to supercharge advertising and marketing. As discussed in the next section, it is presumably becoming challenging for consumers exposed to AI-enhanced marketing to resist purchasing the goods and services promoted in this way.

Strategic questions


Entrepreneurs and companies

Can you explore opportunities regarding subscription-based social media and related platforms that use nudgorithms to make users into the type of people they have always wanted to be? What sorts of things would people want to be nudged about: health, savings, learning? What data sources do you have access to that could be used to provide nudgability?

Government and government agencies

How can you use nudgability to nudge people toward behavioral changes that ultimately reduce social expenditure as part of a social investment approach? Can citizens be nudged to participate more in democracy? Can you partner with social media companies to provide positive nudgability?

Knowledge workers, researchers and media

How can you use nudgorithms in the services you provide to your clients? For social science researchers involved in improving human well-being, how can you research using nudgorithms to encourage better outcomes for people? Can you research what types of nudgorithms work best to achieve positive outcomes for which people in which settings? If in the media, can you get involved in using nudgorithms to help people get the most out of the information they are consuming?

Civil society, ethnic and indigenous groups

Are there areas where you can use nudgorithms to encourage people and communities to move towards improved well-being? As an indigenous group, can you use nudgorithms that align with your cultural traditions to increase your community’s well-being?



Trustability (and Presentability)



Dealing with a world in which almost any type of information or experience can be synthesized, and it is nearly impossible to know what information or identities to trust.

AI is now creating a tsunami of human-instigated infotrash, deep fakes, and false identities, producing a trustability crisis: a situation in which we cannot trust people’s identities or the truth of the information we receive. Closely related to trustability is AI’s presentability, its ability to let people or organizations present themselves in the most favorable light through the documentation they produce, for instance, a CV for an individual or regulatory reports submitted by a company. AI’s presentability now means that how people or organizations present themselves no longer necessarily aligns with how they are in reality. As a result, there is now value in being able to establish the trustability of any particular piece of information. This is already impacting quality assurance, evaluation and regulatory systems in which documentation or communications from an individual or organization are used to make judgments about them. AI is further increasing the amount of non-reciprocal, one-sided, one-way communication, where the party initiating an interaction has no interest in an even-sided exchange. Such one-way communication is becoming much more targeted and engaging. The various forms of one-way human-instigated AI infotrash are creating a growing crisis for democracy, as political discussions of every type are swamped with false information by parties who have zero interest in receiving any input from those they are attempting to communicate with.

In response to the trustability crisis, people are already becoming interested in developing and using trustability networks, in which a chain of people and AI systems validate both identities and the integrity of incoming information. Associated with this is the rise of firewalled communities, which we are now seeing in their early stages. These are communities into which people withdraw to protect themselves from human-instigated infotrash. Such communities may then grow into megaclans, large cross-national groups of people mainly interacting only with those sharing the same worldview and values. It is also highly likely that the trustability crisis will mean that the traditional media’s long-established practice of fact-checking will become a selling point in contrast to the information badlands of social media and similar platforms. Virtualization ethics, the ethical principles increasingly demanded of those participating in online platforms or synthesized worlds, are likely to make demands regarding people’s behavior in such environments, and various forms of digital identification will presumably be used to assist with this. In the face of the trustability crisis, AI will be used to fight back. This may include AI buddies and watchdogs undertaking various forms of digital identification, fact-checking, screening and validation of information.

Strategic questions


Entrepreneurs and companies

What services can you provide regarding trustability? Do you have access to networks that can act as trustability chains or networks? If there is demand for firewalled communities protected from human-instigated infotrash, are there opportunities for you to set these up or provide services for them? Can you help address the quality assurance, evaluation and regulatory problem whereby everyone can produce near-perfect documentation, so the quality of documentation can no longer work as a screening device? Can you use AI’s insightfulness and forecastability to help with this? How can you differentiate your brand as trustable? Can your brand loyalty be leveraged in the trustability crisis? Can you help identify and counter infotrash? Can you provide products that help people protect themselves from fake identities and infotrash?

Government and government agencies

What What-If planning do you need to do for the coming trustability crisis? What planning are you doing concerning regulatory, assessment, and evaluation systems no longer being able to use the quality of the documentation provided as a screening tool for the compliance of those being regulated or evaluated, given that everyone can now produce top-quality documentation using AI? How can trusted sources be more clearly highlighted? Will making your operations more transparent help? Can citizen panels be used to provide more oversight of government activity? What public education urgently needs to take place around the trustability crisis? Can better engagement mechanisms be developed involving more face-to-face interactions with the public? Should independent fact-checking services be provided or promoted? Could you use and disseminate the results from Rich Dialog Processes or similar processes around the theme of trust in government? What countermeasures are you urgently developing to deal with deep fakes, fake news, and infotrash when they arise?

Knowledge workers, researchers, and media

If you are a knowledge worker, can you differentiate your brand as one that can be trusted? Can you help people cut through human-instigated infotrash? If a researcher, can you research the impact of the trustability crisis on human psychology? Can you research ethical frameworks for virtualization ethics? Can you research how trustability chains can be set up and trustability increased? Can you get involved in research on Rich Dialog Processes or similar activities to help build the general population’s trust in institutions and organizations? If in the media, can you draw on traditional media’s long history of source checking as a source of value? Can you enhance fact-checking operations, including real-time fact-checking? Can you work with other similar organizations to develop trustability networks? Can you provide specialist news feeds to different audiences within firewalled communities or megaclans as they develop?

Civil society, ethnic and indigenous groups

What opportunities are there for you to enhance your role as a trusted source of information in your communities? Can you work with other organizations to set up trusted information hubs that provide verified information about issues of relevance to your community? Can you provide information about the trustability crisis? Can you help firewalled communities and megaclans establish themselves if you think they are a reasonable response to the trustability crisis? Can you promote the idea of virtualization ethics for your community members when in synthetic environments? Can you get involved in Rich Dialog Processes or related processes to build trust? Can you help support local media that can provide trusted information? Can you work with researchers on trustability issues? Can you advocate for policy change around trustability issues? If you are an ethnic or indigenous group, can you build on being a trusted source of information for your people? Can you use existing networks within your group to act as trustability networks?

Alignment (and Safeguarding)

Aligning AI’s outcomes with those of humanity and using AI watchdogs and other systems to protect humans from problems that may arise from the widespread use of AI.

AI alignment is the process of ensuring that the outcomes sought by AI systems align with those of their users and humanity in general. Misalignment can range from systems not meeting users’ immediate needs through to major misalignment creating existential risks for humanity. We are in a race to ensure that AI’s outcomes align with ours before AI becomes too powerful for us to control. Central to achieving AI alignment is dealing with the social singularity. The technological singularity can be seen as the moment when AI is progressing so fast that humans cannot turn it off in time. In contrast, the social singularity is when social institutions are not moving fast enough to ensure that we do not come anywhere near the technological singularity. Unfortunately, it is easy to argue that we are already in the social singularity.

In the sections above, we have already discussed many specific threats that may arise from a lack of AI alignment. These include AI systems containing biases, disruption of the labor market, disruption of democracy and increased inequality. Work on AI alignment needs to be resourced, the issues researched, and possible solutions developed. AI safeguarding, preventing AI misalignment, presents challenges because it requires monitoring and limiting some of AI systems’ abilities and behavior. In addition to humans being involved in AI safeguarding, AI watchdogs that manage other AI systems need to be developed urgently due to the complexity of ensuring AI alignment. AI safeguarding and governance will raise concerns among those wary of controls being imposed on how AI systems work.

We are already familiar with what can be called AI throttling, where AI guardrails are put into an AI system for sound reasons but inadvertently reduce some of the system’s functionality even when it is being used for innocent purposes. It is unclear how AI safeguarding will play out as AI continues to develop and be deployed throughout society. The existence of extensive open-source versions of AI systems means that AI alignment and safeguarding are now challenging tasks. This is particularly the case for societies suffering from high levels of polarization, where issues related to managing AI are becoming central to wider political debates.

Strategic questions


Entrepreneurs and companies

What certification and monitoring systems related to AI alignment can you be involved in setting up? Can you get involved in providing products and services related to AI alignment? Can you develop or distribute AI watchdogs? What safeguarding can you build into AI systems used in your business or by your customers? Can you produce or distribute AI systems that have more transparent outcomes through using outcome-transparent AI approaches? Can the robust ethics and values of the AI systems you use become part of your brand? Can you be involved in helping ensure that AI systems document and explain what they are doing when they make decisions?

Government and government agencies

What do you need to do to ensure that AI systems operating in the sectors you are responsible for align with public values? What laws, policies, or regulatory frameworks need to be urgently put in place to ensure the prosocial use of AI? What existing legislative and regulatory frameworks already apply to AI? Can these be publicized? How can AI be leveraged to improve public services and work in alignment with social and well-being outcomes? How can AI be used to increase social cohesion and trust in institutions? Can AI watchdogs be deployed in the sectors in which you work?

What additional regulations and standards are needed to help with transparency and accountability in AI systems, to prevent bias, and to ensure respect for privacy? Can you take the lead in developing and enforcing standards for AI guardians and watchdogs to maintain public trust in AI? How can you enhance public understanding of AI through educational programs? Can you demystify AI and teach about its opportunities and risks so that people can become informed users and more discerning critics of AI?

Knowledge workers, researchers, and media

What business opportunities exist for you in AI alignment? How can you show that the AI systems you use in your work align with customer and broader societal ethics and values? Can you be involved in auditing AI systems or using AI watchdogs to check for alignment? If a researcher, can you do work on how to align AI? Can you further develop frameworks to encourage outcome-transparent AI? What about standards for AI watchdogs to ensure that they are actually protecting us from what we think they are? If in the media, how can you highlight issues regarding AI alignment? Can you be involved in AI safeguarding? Can you introduce AI systems that help check the credibility of your news sources?

Civil society, ethnic and indigenous groups

What can you do to urgently advocate for alignment between AI systems and the outcomes your communities are attempting to achieve? Can you raise the issue of the importance of using AI to reduce, not increase, inequalities? Can you encourage community participation in AI governance? If you are an ethnic or indigenous group, how can you ensure that the AI systems you are involved in are aligned with the ethics and values of your people? How can you use AI safeguarding to help protect the knowledge, cultural practices, and intellectual property of your communities?

Reaction (by society)



How societies and individuals are reacting to AI, including the possible rise of reality hunger, where people will want to escape AI.

The rise of AI is creating a number of reactions on the part of both individuals and society. At the individual psychological level, some people are losing their self-confidence and developing an inferiority complex in the face of AI systems that are clearly more intelligent, and robotics that are more powerful, than they are. Some may adopt node-ism, which occurs when people come to view chatbots as just ‘nodes’ reflecting the content of an AI system’s underlying ideasphere (the ideas, beliefs and worldview captured in the model underlying any AI system). This perspective can lead people to view themselves as mere ‘nodes’ rather than as independent thinking agents coming up with original ideas. Such a revised view of humanity can have significant implications for philosophy, politics, intellectual property, and legal accountability. As has already been discussed, humans are now actively developing multiple types of relationships with AI systems. This is disrupting how people view relationships with other humans versus those with AI systems and humanoids. Also at the level of individual psychology, we are seeing the rapid rise of reality hunger, where some people seek out non-AI-infused goods and services and look for AI-free settings and experiences.

It is easy to argue that the rollout of super-smart, and later physically powerful, AI is already starting to seriously affect the labor market, transferring significant wealth from labor to capital, and that it will then create a disempowered human underclass. As a result, a neo-luddite movement is likely to emerge, and such labor market disruption is likely to increase the demand for a universal basic income of some sort. AI will also have enormous implications for disciplines such as economics. We may see the reinvention of parts of economics and the development of what we could call neoeconomics in response to the radical ways AI is likely to disrupt both how markets operate and how we have traditionally viewed humans as market actors. As already discussed, AI is also likely to increase political polarization and may drive people into siloed firewalled communities that are increasingly isolated. Meanwhile, the destructive power of AI warriors and weapons is opening a new chapter in human conflict, and it is hard to predict the outcomes and responses to this.

On the upside, AI has the potential to be used to push back against many of the negative trends discussed here. If managed appropriately, it could usher in a period of enormous productivity gains and result in people not having to work so much. While there seems to be insufficient appetite amongst governments to seriously address the impacts of AI, the ultimate outcome of the positive and negative sides of AI will depend on the extent to which societies can rapidly become AI-ready and a range of stakeholders and the public actively push for the implementation of prosocial AI.

Strategic questions


Entrepreneurs and companies

What opportunities can you explore regarding the societal reaction to AI? Can you provide services that help companies navigate the societal and psychological impacts of AI on their workforce and customer base? What services could you provide that help people adapt to the AI-induced changes in human relationships and social structures? Can you provide education or services related to people dealing better with the psychological fallout of living in an AI-saturated world? Can you provide AI-free goods and services or AI-free zones and experiences? Can you provide services certifying that goods or services are AI-free?

Government and government agencies

What urgent public messaging do you need to develop regarding AI and society’s reactions to it? What are you working on regarding the possible psychological reactions to AI and the potential social consequences of these? What aspects of the employment effects of AI and AI-embedded robotics do you need to do What-If Planning for now? What steps do you need to take concerning retraining and income support for those whose employment is impacted by AI? What policies, laws, regulations or standards are required for AI companions? How can you use AI to increase democratic participation in decision-making?

Knowledge workers, researchers, and media

What opportunities are there for you as a knowledge worker concerning the social impact of AI? If a researcher, what aspects of AI’s social impact can you research? What about AI’s impacts on human interaction? What about the long-term effects of humans interacting with AI on their cognitive and emotional functioning? If you are a researcher in an area impacted by AI, for instance, economics, what new approaches and frameworks can you develop that speak to our new circumstances in an AI world? How can AI be used to promote psychological well-being as part of prosocial AI? If in the media, how can you best cover the issue of the societal response to AI?

Civil society, ethnic and indigenous groups

What should you be doing regarding the social impacts of AI, for instance, advocating around the issue of employment, AI, and social participation? What community educational programs about AI should you be developing and delivering? How can vulnerable groups be protected from AI? Do some in your community want AI-free zones, goods, or services? If an ethnic or indigenous group, how can you protect your culture from AI? Should you develop ethnospecific AI?

AI Scan Conclusion



The AI Scan Tool detailed here summarizes AI’s abilities and areas of impact based on what has been discussed earlier in this book. The eleven headings provide a framework that anyone can use to identify, and work to answer, strategic questions about AI’s opportunities and risks. Anyone can use this framework for more in-depth and comprehensive thinking about how AI’s next wave will impact themselves, their family, their organization, their sector, and society. If you do use the AI Scan Tool, please acknowledge its source as this book.