Artificial intelligence is an autonomous decision-making system that, by applying *machine learning* algorithms, learns from the examples it is given.

At a Google talk in Madrid, someone recently explained an example to me that I think is very good:

Imagine that you want to write a program to correct spelling.

If you write the program the way it has always been done, you would have to define all the rules of Spanish grammar: usage and conjugation of verbs, grammatical structures, etc. **You would have a perfect program, in exchange for having taken a long time to build it.** If you then wanted a spell checker for another language (say, English), you would practically have to start from scratch: adding the rules of English grammar, verb conjugations, grammatical structures…

With artificial intelligence, there is instead a process that learns the rules of grammar. Rather than explaining them to the system, you give it all the examples you can (all the books in Spanish that you have in electronic format), and the system takes care of 'discovering' the orthographic rules from the examples it is fed. **This approach takes less time to reach a solution (it costs 'fewer' hours of programming than the previous case), but you need (1) to have many examples and (2) to accept that the answer is not always correct** (artificial intelligence systems usually give an answer with a probability attached, so sometimes they fail).

However, once the spell checker exists for Spanish, doing it for English is almost free: we simply feed our artificial intelligence system with data (electronic books) in English. Or in Russian. Or in Mandarin Chinese.
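To make this concrete, here is a minimal sketch of a spell checker that learns from examples rather than from hand-written grammar rules. It is not what the Google speaker described (that is unknown); it is the classic word-frequency approach: count words in a corpus, then correct an unknown word to the most frequent known word one edit away. The tiny corpus and all names here are illustrative.

```python
from collections import Counter
import re

def train(corpus_text):
    # Count word frequencies in the example text; these counts ARE the
    # "learned" model -- no grammar rules are written by hand.
    return Counter(re.findall(r"[a-z]+", corpus_text.lower()))

def edits1(word):
    # All strings one edit (delete, swap, replace, insert) away from `word`.
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    swaps = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def correct(word, model):
    # Keep known words; otherwise pick the most frequent word one edit away.
    if word in model:
        return word
    candidates = edits1(word) & model.keys()
    return max(candidates, key=model.get) if candidates else word

model = train("the quick brown fox jumps over the lazy dog the fox")
print(correct("teh", model))  # -> "the"
```

Switching languages needs no new code at all: call `train` on a Spanish corpus instead of an English one, and the same `correct` function corrects Spanish. That is the point of the article's example.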

Examples in real life:

- Google's anti-spam filters. When we mark emails as spam, Google learns from it and looks for similar patterns to flag other emails that may also be spam.
- IBM's Watson system (which, by the way, is lagging behind) was fed with X-rays plus cancer diagnoses and ended up correcting an erroneous patient diagnosis.

As math progressed through the 20th century, we kinda realized that many relationships between various mathematical objects looked the same even though the objects weren't actually related at all. This is partly because we tend to look at the components that make up a mathematical object: elements of a set, numbers, points in a space, loops in a space, operations, equations, etc, etc, etc. But if instead we look at mathematical objects themselves as the constituents of a theory as a whole, we can illuminate what these theories are a little bit better.

For instance, we have the theory of sets. When we look at things in terms of set theory, we generally want to know what things make up the elements of a particular set. But the category-theoretic view of sets does not care. It cares about how different sets relate to each other, and the way sets relate to each other is functions between them. So instead of understanding the set A by asking what its elements are, we understand the set A by asking what all the functions from A to other sets are.
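A small sketch of this "arrows, not elements" viewpoint, using Python dicts as functions between finite sets. One standard observation it illustrates: an element of A is the same thing as a function from a one-element set into A, so even "what the elements are" can be recovered from arrows alone. The particular sets chosen here are arbitrary.

```python
# In the category of finite sets, a morphism is just a function,
# modeled here as a dict mapping each input to its output.

ONE = {"*"}      # a chosen one-element set
A = {1, 2, 3}

# Each element a of A corresponds to exactly one function ONE -> A,
# so counting arrows ONE -> A recovers the size of A.
elements_as_arrows = [{"*": a} for a in A]
assert len(elements_as_arrows) == len(A)

# Sets "relate to each other" by composing functions between them.
f = {1: "x", 2: "y", 3: "x"}        # f : A -> {"x", "y"}
g = {"x": True, "y": False}         # g : {"x", "y"} -> {True, False}
g_after_f = {a: g[f[a]] for a in A} # the composite g . f : A -> {True, False}
print(g_after_f == {1: True, 2: False, 3: True})  # True
```

The structural data — which arrows exist and how they compose — is what category theory keeps; the internal identity of 1, 2, and 3 plays no role.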

This mindset is incredibly useful, because it allows us to ignore unimportant specifics and focus on the important structural components instead. Moreover, it allows us to explicitly compare different theories. We can transform the theory of topological spaces into the theory of linear algebra, allowing us to easily conclude tough things about weird spaces using simple algebra.

**TL;DR:** Look at the relationships between things, rather than the components of a thing.