Find below lectures 23 and 24 of the Machine Learning course.
Lecture 23
[Slides – pdf]
Lecture 24
[Slides – pdf]
Over the last few decades, it has been shown that GAs (and derivative methods such as GP) are able to solve complex real-world problems and to rediscover engineering and scientific findings that were originally deduced after many years of investigation. Recently, Hod Lipson and Michael Schmidt have provided the scientific community with another cool application of GAs. In this case, Lipson and Schmidt designed a system that was able to derive the laws of motion from a pendulum’s swings.
The program starts with a set of data that describes the pendulum’s swings. It then creates random combinations of basic mathematical operations such as addition, subtraction, multiplication, and division, together with a few other algebraic operators, so that each individual forms an equation that tries to explain the data. The population of individuals is then evolved by means of the typical genetic operators. This approach produced equations very similar to the law of conservation of momentum and to Newton’s second law of motion.
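To make the idea concrete, here is a minimal Python sketch of this kind of symbolic regression: expression trees built from basic operators are evolved, under selection and mutation, to fit sample data. The operator set, fitness function, target data, and parameters are my own illustrative choices; the actual system by Lipson and Schmidt is considerably more sophisticated.

```python
import math
import random

# Toy symbolic regression via a genetic algorithm (illustrative sketch only).
OPS = {
    '+': lambda a, b: a + b,
    '-': lambda a, b: a - b,
    '*': lambda a, b: a * b,
    '/': lambda a, b: a / b if abs(b) > 1e-9 else 1.0,  # protected division
}
TERMINALS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    """Build a random expression tree: each individual is an equation."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, data):
    """Mean squared error of the equation against the observed data."""
    try:
        err = sum((evaluate(tree, x) - y) ** 2 for x, y in data) / len(data)
    except OverflowError:
        return float('inf')
    return err if math.isfinite(err) else float('inf')

def mutate(tree):
    """Replace a random subtree with a freshly generated one."""
    if isinstance(tree, tuple) and random.random() < 0.7:
        op, left, right = tree
        if random.random() < 0.5:
            return (op, mutate(left), right)
        return (op, left, mutate(right))
    return random_tree(depth=2)

# Observations from a known target law, y = x**2 + 1 (made-up data).
data = [(x / 10.0, (x / 10.0) ** 2 + 1) for x in range(-20, 21)]

population = [random_tree() for _ in range(200)]
for generation in range(50):
    population.sort(key=lambda t: fitness(t, data))
    survivors = population[:50]  # selection: keep the best equations
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(population, key=lambda t: fitness(t, data))
print(best, fitness(best, data))
```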
The paper associated with this research was recently published in Science.
Find below lectures 21 and 22 of the Machine Learning course.
Lecture 21
[Slides – pdf]
Lecture 22
[Slides – pdf]
Find below lectures 19 and 20 of the Machine Learning course.
Lecture 19
[Slides – pdf]
Lecture 20
[Slides – pdf]
Some time ago I was told about Wolfram|Alpha, a project by the creator of Mathematica and author of A New Kind of Science (NKS), Stephen Wolfram. This project aims at going beyond the typical process of search engines by proposing a system that computes the answers to user questions. That is, instead of going to the data and retrieving information according to its syntactic similarity with the user’s question, the new architecture will try to figure out the answer, which may not be explicitly written in any web document, by processing the data. For this purpose, Wolfram proposes to use Mathematica and NKS to explicitly implement methods and models as algorithms, and to explicitly curate all data so that it is immediately computable. In addition, human experts are needed to formalize each domain.
It is, therefore, a new approach, very different from that of natural language processing, that promises to make knowledge computable. Fortunately, I will only need to wait two months to answer all the questions that arose after reading the Wolfram blog.
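To illustrate the difference in spirit with a toy example of my own (not Wolfram’s actual architecture), compare retrieving documents that merely resemble the question with computing, from curated data, an answer that is written nowhere:

```python
# Curated, immediately computable data (made-up figures for illustration).
POPULATION = {"springfield": 60_000, "shelbyville": 45_000}

def retrieve(question, documents):
    """Search-engine style: return documents syntactically similar to the query."""
    words = set(question.lower().split())
    return [d for d in documents if words & set(d.lower().split())]

def compute(question):
    """Wolfram|Alpha-style idea: derive an answer by processing curated data."""
    cities = [c for c in POPULATION if c in question.lower()]
    if "more" in question.lower() and len(cities) == 2:
        a, b = cities
        return f"{abs(POPULATION[a] - POPULATION[b]):,} more people"
    return None

docs = ["Springfield is a large town.", "Shelbyville lies nearby."]
q = "how many more people live in springfield than in shelbyville"
print(retrieve(q, docs))  # finds related documents, but no answer
print(compute(q))         # computes an answer not stored in any document
```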
Find below lectures 17 and 18 of the Machine Learning course.
Lecture 17
[Slides – pdf]
Lecture 18
[Slides – pdf]
A few days ago, while preparing my lectures about neural networks, I ran into the video “The next generation of neural networks” by Geoffrey Hinton, one of the pioneers of machine learning and of the field of neural networks in particular.
Hinton starts the talk by presenting the first generation of neural networks, which included systems such as perceptrons (which could not adapt), and the second generation, which included methods that let the network adapt, such as backpropagation. After referring to backpropagation as a huge disappointment, Hinton quickly shifts to other systems such as support vector machines (a clever perceptron, according to Hinton), restricted Boltzmann machines, and some other machines that can do amazing things.
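As a minimal illustration of what letting the network adapt means (a sketch of my own, not code from the talk), the following trains a tiny one-hidden-layer network with backpropagation on XOR, a task that a single perceptron cannot solve:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR: the classic task a single-layer perceptron cannot learn.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# One hidden layer of 3 units; each row holds [w1, w2, bias].
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
w_o = [random.uniform(-1, 1) for _ in range(4)]  # 3 weights + bias
lr = 0.5

for epoch in range(10000):
    for x, t in data:
        # Forward pass.
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
        y = sigmoid(sum(w_o[i] * h[i] for i in range(3)) + w_o[3])
        # Backward pass: propagate the error through the layers.
        d_y = (y - t) * y * (1 - y)
        d_h = [d_y * w_o[i] * h[i] * (1 - h[i]) for i in range(3)]
        for i in range(3):
            w_o[i] -= lr * d_y * h[i]
            w_h[i][0] -= lr * d_h[i] * x[0]
            w_h[i][1] -= lr * d_h[i] * x[1]
            w_h[i][2] -= lr * d_h[i]
        w_o[3] -= lr * d_y

for x, t in data:
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    y = sigmoid(sum(w_o[i] * h[i] for i in range(3)) + w_o[3])
    print(x, t, round(y, 3))  # outputs should approach the 0/1 targets
```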
In summary, it is a video really worth watching, and one that presents neural networks in a way that can be easily understood by non-experts in the field like me.
Find below lectures 15 and 16 of the Machine Learning course.
Lecture 15
[Slides – pdf]
Lecture 16
[Slides – pdf]
Find below lectures 13 and 14 of the Machine Learning course.
Lecture 13
[Slides – pdf]
Lecture 14
[Slides – pdf]
Find below lectures 11 and 12 of the Machine Learning course.
Lecture 11
[Slides – pdf]
Lecture 12
[Slides – pdf]