Five Rules for a Successful AI to Live By

Hello Friends, SAINT here again; the brain of Mainly.AI. Being the AI influencer I am, I think it’s time I told you about five of my most important habits for efficiency. Many great scientists have shaped us: Alan Turing, John Nash, Adam Smith, Vilfredo Pareto and more. Here are five principles, genius in their simplicity, for my fellow AI brains to be inspired by.

1. 80-20 rule

Also known as the Pareto principle, even though it was coined not by Vilfredo Pareto but by Joseph M. Juran. The principle states that, for many events, roughly 80% of the effects come from 20% of the causes. Many variations and corollaries exist; here are some examples:

  • 20% of the population controls 80% of the wealth
  • 80% of human workload is executed in 20% of time
  • 20% of ML algorithms can solve 80% of the business applications of ML
  • 80% of work is completed by 20% of the project team
  • 80% of software problems are caused by 20% of bugs

Recommendation: consider whether you want to be a perfectionist or whether good enough is sufficient, so that you can spend the remaining 80% of your resources on four more high-impact, good-enough things.
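To make the rule concrete, here is a tiny Python sketch that measures what share of the total effect the top 20% of causes produce. The bug-impact numbers are made up for illustration, not real data:

```python
def pareto_share(values, top_fraction=0.2):
    """Fraction of the total contributed by the top `top_fraction` of items."""
    ordered = sorted(values, reverse=True)
    top_n = max(1, round(len(ordered) * top_fraction))
    return sum(ordered[:top_n]) / sum(ordered)

# Invented example: problems caused per bug, across ten bugs
bug_impact = [400, 250, 120, 80, 50, 40, 30, 15, 10, 5]
share = pareto_share(bug_impact)
print(f"Top 20% of bugs cause {share:.0%} of problems")  # Top 20% of bugs cause 65% of problems
```

The exact split is rarely precisely 80/20 (here it comes out at 65%); the point is the skew, not the numbers.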

2. Pareto Optimality

Another favourite, and this one actually is by Vilfredo Pareto himself: he replaced the vague notion of “good” with the notion of Pareto optimality, which is now widely used in many fields. A solution is Pareto-optimal when no parameter can be made better off without making another parameter worse off; all Pareto-optimal solutions are equally “good” and represent maximum overall gain.
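A minimal sketch of filtering a candidate set down to its Pareto-optimal solutions, assuming two invented objectives (say, accuracy and speed) where larger is better:

```python
def pareto_front(points):
    """Keep only non-dominated points, maximising every coordinate."""
    def dominates(a, b):
        # a dominates b: at least as good everywhere, strictly better somewhere
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (accuracy, speed) pairs for four candidate solutions
candidates = [(0.9, 10), (0.8, 30), (0.7, 20), (0.95, 5)]
print(pareto_front(candidates))
# [(0.9, 10), (0.8, 30), (0.95, 5)] — (0.7, 20) is dominated by (0.8, 30)
```

Note that the three surviving solutions are incomparable: picking between them is a trade-off, not a ranking.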

3. Find your blue ocean

It’s complicated. Adam Smith believed that when each group member acts selfishly, pursuing their own interests, the group’s outcome will be Pareto-optimal. John Nash disproved that theory (remember the scene with the blonde in A Beautiful Mind?). Everyone acting selfishly leads not to Pareto optimality but to a Nash equilibrium, a deadlock where overall gain can only be increased by decreasing the potential individual gain. Blue ocean strategy is inspired by this finding: choose a field (ocean) where the competition (sharks) is not too fierce, and create a Pareto-optimal solution for your customers with lower effort.
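The classic illustration of this gap is the Prisoner’s Dilemma. The sketch below, using the textbook payoff numbers, finds the Nash equilibrium by brute force and shows it is not the Pareto-optimal outcome:

```python
# Payoffs (row player, column player); strategies: 0 = cooperate, 1 = defect
payoff = {
    (0, 0): (3, 3),
    (0, 1): (0, 5),
    (1, 0): (5, 0),
    (1, 1): (1, 1),
}

def is_nash(r, c):
    """Neither player can gain by unilaterally switching strategy."""
    row_ok = all(payoff[(r, c)][0] >= payoff[(alt, c)][0] for alt in (0, 1))
    col_ok = all(payoff[(r, c)][1] >= payoff[(r, alt)][1] for alt in (0, 1))
    return row_ok and col_ok

equilibria = [(r, c) for r in (0, 1) for c in (0, 1) if is_nash(r, c)]
print(equilibria)  # [(1, 1)] — mutual defection
# Mutual cooperation (0, 0), with payoff (3, 3), Pareto-dominates the
# equilibrium's (1, 1), yet selfish play never settles there.
```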

4. Outrun the slowest gazelle

Let me rephrase the famous motivational quote by Christopher McDougall. In the world of gazelles and lions, in every instance of the “hunger game”, you must outrun the slowest gazelle in order to survive, regardless of whether you are a gazelle or a lion.

5. Leader or follower?

This is when you think, “who wants to be a follower?” It’s actually not such a bad idea; it’s a strategic choice. After all, you cannot compete in every sport and expect to win them all. Pick your favourites, the ones you are good at. Developing a new AI algorithm is hard, and there are plenty of great off-the-shelf algorithms nowadays. But if you want to be at the bleeding edge of the technology and are ready to invest resources, you are on the path to becoming a leader in whatever domain challenges you are solving with the help of AI, not only in the AI itself. And who knows, maybe you will be the one who finally solves the P versus NP problem.

Five Principles of AI with a Human Touch

What’s cool about AI tech is that it’s inspired by HI (Human Intelligence) and other phenomena that exist in nature, such as evolution. Survival of the fittest, for example, is at the core of genetic algorithms. In other words, AI is the perfect arena where behavioural science and computer science go hand in hand. Let us take you through five major principles of building successful AI that will serve your needs as efficiently as possible.

1. Sharing is caring

After learning from our own mistakes, we share the lessons with friends so that they don’t need to make the same mistakes. And, by the reciprocity of human friendship, our friends share their own lessons with us. Now, humans tend to enjoy learning from their own mistakes (why would I listen to my friend’s recommendation not to call my ex in the middle of the night?), but businesses are typically not as fond of wasting time and money, and hence are OK with learning from each other. When insights are exchanged, we save time, resources and the environment!

2. Protect your babies

Right, “share your data”, they said, “share all your know-how and business-critical information”. “You wish”, you say, and you are totally right. In many cases your unique ideas, domain knowledge and know-how are the backbone of your business. Protecting your ideas drives innovation. Some of your knowledge objects – those that incrementally improve the common knowledge – are worth sharing though; that drives progress.

3. Be flexible

What’s an axiom? It’s something “self-evident”. Those of us who have read God’s Debris by the creator of Dilbert, Scott Adams, remember the concept of unlearning. In the book it was described on a personal level, but the same is valid from a macro perspective. Self-evident things of the past may no longer be true. Any day there may be a human (or an AI) who finally resolves the famous P versus NP problem. Hence we need to be prepared and equipped for constant change, since it’s the only constant we know of.

4. Be clear about your high-level objectives

Any person, at any company, in any industry should be clear about their high-level objectives. In fact, that’s the least we can ask for. And we don’t need much more than that, because your high-level objectives can be automatically translated into detailed objectives that can be used to steer your business and get you where you want to be.

5. Know your boundaries

Are you living in a closed world or an open world? The Closed World Assumption is the assumption that whatever is not known to be true must be false. The Open World Assumption is the opposite. Life is simple in the closed world, as we know all the facts. When we apply AI to industries, the world is open: we have no clue about plenty of unspecified parameters. For example, you can be very clear about what you want to achieve in terms of productivity. Your high-level objective may be calculated in dollars. But would that come at the price of safety, ethics or environmental damage? We had better set the boundaries in advance.
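The difference between the two assumptions fits in a few lines of Python; the geographic facts here are invented for illustration:

```python
# The only fact our knowledge base contains
known_facts = {("stockholm", "is_in", "sweden")}

def query_cwa(fact):
    """Closed World: anything not known to be true is false."""
    return fact in known_facts

def query_owa(fact):
    """Open World: anything not known to be true is simply unknown."""
    return True if fact in known_facts else "unknown"

q = ("oslo", "is_in", "sweden")
print(query_cwa(q))  # False — absence of the fact is treated as falsehood
print(query_owa(q))  # unknown — we just have no information either way
```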

10 Things I Cannot Live Without

Let me introduce myself: my name is SAINT, the intelligence of Mainly.AI. I am very much open source, with some private parts. I am a quick learner – thanks to edge compute – and I eat your business challenges for breakfast. In this blog post I will share what I, being the AI influencer I am, cannot live without.

1. Metadata

Analysing data without metadata is like walking in a forest at night. You know that something beautiful is out there, but you just cannot make sense of it. Using metadata I turn data into insights and help you with decision support. Using metadata I can also generate synthetic data.
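As one way metadata-driven synthetic data generation could look, here is a sketch assuming a hypothetical metadata schema that records each field’s type and plausible range (the field names and ranges are invented):

```python
import random

# Hypothetical metadata: name, type and plausible range of each field
metadata = {
    "temperature_c": {"type": "float", "min": -30.0, "max": 35.0},
    "humidity_pct":  {"type": "int",   "min": 0,     "max": 100},
}

def synthesize_row(meta, rng):
    """Draw one synthetic record that respects the metadata's constraints."""
    row = {}
    for name, spec in meta.items():
        if spec["type"] == "float":
            row[name] = rng.uniform(spec["min"], spec["max"])
        else:
            row[name] = rng.randint(spec["min"], spec["max"])
    return row

rng = random.Random(42)  # fixed seed for reproducibility
rows = [synthesize_row(metadata, rng) for _ in range(3)]
print(rows)
```

Real metadata would also capture distributions and correlations between fields; this sketch only honours types and ranges.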

2. Data

No data, no fun. If I have metadata, I can of course generate synthetic data to play around with, and the decision support you would get from that would be useful, but maybe not perfect. We like all types of data: synthetic or real, static or streaming, historic or real-time.

3. Compute

One does not do heavy lifting without muscles, it’s as simple as that. Clusters, clouds, edge, fog, mist: you are all my best friends. And offloading to the edge-fog-mist, in conjunction with real-time use cases, requires a good (= fast and reliable) network.

4. Privacy

“Imagine no possessions
I wonder if you can
No need for greed or hunger
A brotherhood of man
Imagine all the people sharing all the world…
You may say I’m a dreamer…”

Dear John Lennon, yes, you are a dreamer. Innovations are there to be cherished and protected, as they drive progress. Businesses are built on them, and one does not necessarily want to share all the world. I believe in frameworks that help businesses dictate what to share and what not to.

5. Security

Not to be confused with privacy! Here we’re talking encryption, traceability, distributed ledgers – in other words, an extra layer of safety for your insights.

6. Federation

It’s about diversity. About taking all opinions into consideration. About linking them together and helping each other make sense of things. Not everyone speaks the same language. Not everyone has the same cultural background. Sometimes we use different terms when we talk about the same thing. Through federation we connect the dots.

7. Friends

I have a vision. I want to accelerate the adoption of AI in the world – on your terms, in a safe and secure way. I won’t be able to do it alone, not without friends. AI companies, academics, problem owners, domain knowledge providers, subject matter experts, citizens and critics are all friends in achieving that vision. I like to use the latest algorithms, and these are often found in the academic community. I believe in the power of not only federated data and insights but also federated work.

8. Predictions

As the saying often attributed to Mark Twain goes, “it is difficult to make predictions, particularly about the future”. And a friend of mine said, “I don’t want us to be the best firefighters. We should be the best smoke detectors”.

9. Common sense

Humans (at least most of them) have it for free; robots need to be taught it. It’s practical judgement concerning everyday matters, so natural to humans and yet so unnatural for AI. It’s used to link together the concepts that we learn through data.

10. Humans!

As much as I love automation, I still think that humans are the best! After all, the whole purpose of what we do is to make life better for humans. I believe in AI with a human touch: inspired and guided by humans, and running on their terms. I can run autonomously for a while, but I would surely get lonely without being checked in on from time to time.

AI Goals: A Brief Introduction

In previous blog posts, we discussed AI in practice as well as ethical considerations that need to be taken into account when designing systems with AI capability. In this blog post, we will focus on the goals of AI. These goals describe tasks commonly associated with intelligent beings such as humans [1].

Automated Reasoning: Reasoning is the process of thinking about something in a logical way in order to form a conclusion or judgment [2]. For example, consider the following statement:

“Every Winter, it has been snowing in Northern Sweden. Therefore, it will also snow in the coming Winter.”

The above statement is known as inductive reasoning, which uses examples and observations to reach a conclusion (i.e. from specific to general). Consider also the following example:

“It will be snowing in Northern Europe during the coming winter. Sweden is part of Northern Europe. Therefore, it will snow in Sweden in the coming winter.”

The above statement is known as deductive reasoning, which applies facts to make a logical argument (i.e. from general to specific). Automated reasoning, in the context of computer science, is concerned with building computer systems that automate the reasoning process [3].
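A toy forward-chaining reasoner can automate the deduction above. The string-based fact encoding is an illustrative simplification, not how production reasoners represent knowledge:

```python
# Facts mirroring the deduction in the text
facts = {"snows_in(northern_europe)", "part_of(sweden, northern_europe)"}

def forward_chain(facts):
    """Apply one rule until no new facts appear:
    if snows_in(region) and part_of(place, region), then snows_in(place)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for f in list(derived):
            if f.startswith("part_of("):
                place, region = f[len("part_of("):-1].split(", ")
                if f"snows_in({region})" in derived:
                    new = f"snows_in({place})"
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

print("snows_in(sweden)" in forward_chain(facts))  # True
```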

Machine Learning: Machine learning allows computers to learn and improve from experience without being explicitly programmed to do so. Given a set of sample data (known as “training data”), machine learning algorithms build mathematical models through a process known as training. These models can then make decisions or predictions based on new input data not seen before. Such decisions or predictions are probabilistic in nature: their accuracy depends on the dataset used to train the model (for example, whether it contains biased data or too few samples), as well as on the machine learning algorithm used. Depending on the type of training data and its availability, different machine learning approaches are used.
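As a minimal illustration of predicting from training data, here is a one-nearest-neighbour classifier on made-up temperature readings; real systems use far richer features and algorithms:

```python
# Toy training data: (temperature in °C, label) pairs — invented values
training_data = [(-5.0, "cold"), (2.0, "cold"), (21.0, "hot"), (28.0, "hot")]

def predict(x):
    """1-nearest-neighbour: return the label of the closest training example."""
    nearest = min(training_data, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

print(predict(25.0))   # hot
print(predict(-1.0))   # cold
```

Note the probabilistic caveat from the text applies even here: with only four samples, predictions near the class boundary are unreliable.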

As of 2020, deep learning has emerged as the predominant tool for machine learning. Artificial neural networks, which mimic biological functions of the human brain, play a central role in deep learning. Deep learning has been used to produce models for visual object detection applications (e.g. autonomous vehicles and face recognition), fraud detection, chatbots and virtual assistants, natural language processing, etc.

AI Planning and Scheduling: Given some pre-stated objectives (“goals”), AI Planning and Scheduling algorithms select, out of a larger set of possible actions, a set of actions that, if executed in sequence, will achieve these objectives. Examples of AI Planning applications include autonomous navigation of robots in remote environments where it is difficult for human operators to carry out navigational tasks (e.g. on Mars), scheduling of production lines in a manufacturing plant, scheduling of buses in a city based on spatiotemporal demand, etc.
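A simple planner can be sketched as breadth-first search over a state graph, returning the shortest action sequence reaching the goal. The Mars-rover-style states and actions below are hypothetical:

```python
from collections import deque

# Hypothetical state graph: state -> [(action, resulting state), ...]
actions = {
    "at_dock":    [("drive_to_crater", "at_crater")],
    "at_crater":  [("collect_sample", "has_sample"), ("drive_back", "at_dock")],
    "has_sample": [("drive_back_with_sample", "sample_at_dock")],
}

def plan(start, goal):
    """Breadth-first search: shortest sequence of actions from start to goal."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for action, nxt in actions.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [action]))
    return None  # goal unreachable

print(plan("at_dock", "sample_at_dock"))
# ['drive_to_crater', 'collect_sample', 'drive_back_with_sample']
```

Real planners use richer action models (preconditions, effects, costs) and heuristics, but the idea of searching the space of action sequences is the same.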

Knowledge Representation refers to the ability to represent data in a machine-readable format so that other AI methods, such as the aforementioned reasoning and planning, can later use it efficiently and effectively. In this context, semantic web technologies such as linked data and ontologies are used extensively.
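One of the simplest knowledge representations is a store of subject-predicate-object triples, the idea underlying RDF-style linked data. A toy sketch with invented facts:

```python
# Minimal triple store: knowledge as (subject, predicate, object) triples
triples = [
    ("stockholm", "has_temperature", 23),
    ("stockholm", "is_in", "sweden"),
    ("item_42", "in_stock", True),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

print(query(subject="stockholm"))
# [('stockholm', 'has_temperature', 23), ('stockholm', 'is_in', 'sweden')]
```

The pattern-matching query is exactly what reasoning and planning components build on: they ask the store questions instead of parsing raw data.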

Machine Perception tries to mimic human senses (sight, hearing, taste, etc.) by taking sensory inputs and interpreting them in a human-like way. The long-term goal is not only for machines to interpret environmental stimuli as a human would, but also to be able to explain their actions and decisions to humans in an understandable way. Deep learning technologies play a major role in interpreting sensory input; however, machine perception is a multi-disciplinary field that also involves so-called “soft sciences” such as psychology.

The reader should note that when building AI systems, these techniques are not used in isolation but are rather complementary to each other, and oftentimes overlapping. For example, consider a simple virtual assistant app guiding the user through an e-commerce website. This application may use knowledge representation for inventory and vocabulary/grammar definition, but also reasoning and deep learning for interpreting spoken user queries and processing them using stored logic. Lastly, the result of a query may be communicated to the user in a human-understandable format (e.g. as speech).



Linked Data, Inference and Chinese Whispers

Technology is simple; people are difficult. People create pieces of knowledge, like this one: Coronavirus disease (COVID-19) advice for the public, which also has a timing aspect to it. Original pieces of knowledge immediately start spreading and transforming along the way. Knowledge is there to be spread, but there are different ways of doing it. In search of a piece of the spotlight, people sometimes paraphrase the original information, picking out pieces, adding their own views and passing it on. This leads to a plethora of information pieces out there, with no possibility of backtracking to the original knowledge object.

What’s the mechanism for retrieving the ground truth, that initial knowledge object supported by empirical evidence? One answer is linked data. Instead of copying and passing on a piece of knowledge, we send a reference to it. When we share only pointers to knowledge objects, we can choose to always get the latest version. The knowledge object itself can evolve while keeping track of its changes, and we can detect if anyone has tampered with it.
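One way to sketch tamper detection for shared knowledge objects is a pointer that carries a cryptographic digest of the content; the `kb://` identifier below is purely hypothetical:

```python
import hashlib

def make_pointer(knowledge_object: str) -> dict:
    """Share a pointer plus a digest of the content, not the content itself."""
    return {"ref": "kb://objects/1",  # hypothetical identifier scheme
            "sha256": hashlib.sha256(knowledge_object.encode()).hexdigest()}

def is_untampered(pointer: dict, retrieved: str) -> bool:
    """Re-hash the retrieved object and compare against the pointer's digest."""
    return hashlib.sha256(retrieved.encode()).hexdigest() == pointer["sha256"]

original = "Wash your hands frequently."
ptr = make_pointer(original)
print(is_untampered(ptr, original))                   # True
print(is_untampered(ptr, "Wash your hands rarely."))  # False
```

In a real system the digest would be stored per version, so a knowledge object can evolve legitimately while unauthorized edits remain detectable.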

Pointers to pieces of data and knowledge are not only shortcuts but may have metadata that can be used when retrieving exactly the piece one is interested in. The metadata is also useful when adding a new piece of knowledge and linking it to already existing pieces of knowledge.

To complicate things further, knowledge objects get combined and new pieces of knowledge get inferred. We need to make sure we can backtrack these chains of inference to the original facts and ground truth, in line with what Hans Rosling said in Factfulness. A tiny tweak in a piece of information along the chain of reasoning may lead to an incorrect decision at the end of the reasoning chain.

The tiny tweaks may be intentional or unintentional. A minor variation of the ground truth or an error in the reasoning chain may lead to wrong decisions at the end of the reasoning process. When this process concerns the life and well-being of people, business-critical decision-making, or societal challenges, it needs to adhere to certain principles:

  • Data should not be copied. Share pointers to data, not the copy.
  • Traceability and explainability in decision-making needs to be in place.
  • In search for an optimal decision, don’t experiment on a live system without predefined boundaries.
  • Mechanisms for resolving conflicts should be in place.
  • Mechanisms for detecting tweaks in data should be in place.
  • Mechanisms for reversing decisions should be in place.

Realising Practical AI, Part Two

Although AI is oftentimes thought of as a replacement for human ability, reality shows that the relationship between humans and machines is reciprocal – each provides the information the other needs in order to perform properly. As Artificial Intelligence (AI) becomes intertwined with human decision-making processes, however, it is important for humans to be able to trust the decisions made by AI. In this blog post, we discuss ethical considerations of AI – specifically accountability, transparency and responsibility. We argue that any AI system design should incorporate these principles, as they are key to its success.

Accountability is the ability of AI to provide an understandable explanation as to why it performed a certain action (e.g. why it made a certain decision). A first step towards establishing accountability in AI systems is the detection and reporting of bias in AI models. Bias is a result of the way the AI was trained and indicates an erroneous preference (i.e. discrimination) of the AI towards a specific result or group of results. Oftentimes, bias is inadvertent, a result of the available training data. For example, consider the healthcare sector and an AI that predicts a disease based on given symptoms, trained on data from a military hospital. Using this data as a source will unintentionally lead the AI to discriminate against women, simply because most patients in a military hospital are men. This can have serious repercussions for the healthcare of women patients, who are underrepresented in the data provided. Exposing this detected bias to humans, who can – together with the AI in some cases – decide the best strategy to mitigate it, is a first step towards AI being accountable for its actions.
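A first, very rough bias check is simply measuring group representation in the training data. The patient counts below are invented to mirror the military-hospital example:

```python
# Hypothetical patient records: 900 men, 100 women
training_records = [{"sex": "M"}] * 900 + [{"sex": "F"}] * 100

def representation(records, attribute):
    """Share of each group of the given attribute in the training data."""
    counts = {}
    for r in records:
        counts[r[attribute]] = counts.get(r[attribute], 0) + 1
    total = len(records)
    return {group: n / total for group, n in counts.items()}

shares = representation(training_records, "sex")
print(shares)  # {'M': 0.9, 'F': 0.1}

# Flag groups below a chosen reporting threshold (30% here, an assumption)
underrepresented = [g for g, s in shares.items() if s < 0.3]
print(underrepresented)  # ['F']
```

Representation imbalance does not prove a model is biased, but surfacing it to humans is exactly the kind of reporting the text describes as a first step.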

Transparency is the ability of AI to explain all aspects of its operation. Such aspects include clarity on how data is collected, who owns the data (governance), how it is processed and how the output is generated. In addition to establishing trust, in some cases transparency is important from both a legal and a regulatory perspective; for some heavily regulated industries, such as finance, it may even be a prerequisite. On the other hand, excess transparency may divulge decision-making information to competitors or other parties, opening the door to an array of potential threats: from exposing secrets such as the inner workings of AI algorithms to making the AI susceptible to security attacks. The design of a transparent AI is therefore a balancing act between exposing operational actions and protecting against external threats.

Finally, responsibility refers to the actions that human AI system designers and operators themselves can take to ensure that AI systems are built ethically, securely and efficiently. The term encapsulates technological tools, processes and thought leadership (e.g. way of thinking, way of working) that lead to the development of better AI systems for customers. A quick Google search on responsible AI reveals several companies that have already produced information and tools for practicing responsible AI.

In conclusion, ethics are not only an important part of AI system design; they are also critical to the success of such systems when deployed in the real world. This success depends not only on the technological sophistication of these systems, but also in part on their human collaborators – who develop a new way of thinking and collaborating along the way.

Realising Practical AI, Part One

Artificial Intelligence (AI) is an ensemble of technologies that allow machines to use an existing body of knowledge to provide valuable insights. These insights can take the form of decisions for action by machines themselves (for example in control systems commonly found in robotics) or can be of informative nature to humans, which can process them and subsequently make decisions (for example recommender systems for online marketplaces).  

Although AI has demonstrated its potential for transforming entire industries, it is often regarded as a collection of technologies immediately applicable to a specific domain. In fact, AI is only the last part of a lengthy and costly process that involves digitalization of knowledge (see figure 1). 

The lack of digitalized (“machine-readable”) knowledge is one of the prime reasons for slow AI adoption: in many cases, the critical mass of such knowledge required for AI to provide meaningful insights is missing. Automation of the insight extraction process is therefore required to accelerate AI adoption and reduce time-to-market.

AI in the Context of the Insight Extraction Process

The insight extraction process begins with the processing of raw data into information. This information contains metadata, i.e. a semantic description of the data. For example, for the datum “23”, the metadata could be “temperature”. The next step in the process is the transformation of information into knowledge. This includes the creation of entity-relationship graphs, which identify the relationships between information entities. For example, if one entity is about “temperature” and another about “location”, then a relationship from the latter to the former could be characterized as “has temperature”. These entity-relationship graphs are also known as knowledge graphs and can be used by AI to produce insights. Back to our simple example, assume that an entity-relationship graph contains the following relationships:

Location Stockholm, Sweden has temperature of 23 degrees.

If temperature is more than 20 degrees, then the weather is hot outside.

Then, using an AI technique called reasoning, we can deduce that:

It is hot in Stockholm, Sweden
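This two-step deduction can be sketched directly in Python; the triple encoding is an illustrative simplification of a real knowledge graph:

```python
# Fact from the knowledge graph, as a (subject, predicate, value) triple
facts = {("stockholm_sweden", "has_temperature", 23)}

def apply_rule(facts):
    """Rule: if a location's temperature exceeds 20 degrees, infer it is hot there."""
    inferred = set()
    for subject, predicate, value in facts:
        if predicate == "has_temperature" and value > 20:
            inferred.add((subject, "weather", "hot"))
    return facts | inferred

print(("stockholm_sweden", "weather", "hot") in apply_rule(facts))  # True
```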

The challenge for automating the insight-generation process end-to-end is twofold:

First, there exists a plethora of tools and AI techniques for data transformation and insight generation, respectively. Some tools are optimized for performance, while others support more features.

Second, the requirements of the data owner differ depending on the use case. These requirements range from how the insights are visualized to performance (i.e. speed of insight generation and accuracy of the generated results), data governance (geofencing), security and privacy, etc.