Scientists and researchers have tried to do this, but it is a difficult problem and current methods fall short. Deep learning is one example: researchers have found that it excels at processing information but can struggle with reasoning.
The latest player to enter the game, however, is called Relational Networks (RNs), introduced in a paper by DeepMind, one of the world leaders in artificial intelligence research.
DeepMind's approach is to enable machines to reason by attaching RNs to convolutional neural networks and recurrent neural networks, architectures traditionally used for computer vision and natural language processing.
"It is not that deep learning is unsuited to reasoning tasks — more that the correct deep learning architectures, or modules, did not exist to enable general relational reasoning. For example, convolutional neural networks are unparalleled in their ability to reason about local spatial structure — which is why they are commonly used in image recognition models — but may struggle in other reasoning tasks," a DeepMind spokesperson said in a recent interview.
RNs are described as "plug-and-play" modules: their architecture lets the network focus on the relationships between pairs of objects.
"We speculate that the RN provided a more powerful mechanism for flexible relational reasoning, and freed up the CNN to focus more exclusively on processing local spatial structure. This distinction between processing and reasoning is important," DeepMind's paper said.
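The pairwise idea described above can be illustrated with a toy sketch: every ordered pair of objects is passed through a relation function, the results are summed, and an output function scores the pooled result. This is only a minimal illustration; in the actual model, the relation function and the output function are learned neural networks, and the simple hand-written `g` and `f` below are invented stand-ins.

```python
from itertools import product

def relation_network(objects, g, f):
    """Toy Relation Network: apply g to every ordered pair of objects,
    sum the resulting relation vectors elementwise, then apply f."""
    relations = [g(o_i, o_j) for o_i, o_j in product(objects, repeat=2)]
    pooled = [sum(vals) for vals in zip(*relations)]  # elementwise sum over pairs
    return f(pooled)

# Toy example: objects are 2-D feature vectors; g compares a pair, f scores the pool.
objects = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
g = lambda a, b: [abs(a[0] - b[0]), abs(a[1] - b[1])]  # stand-in "relation" function
f = lambda pooled: sum(pooled)                          # stand-in output function

print(relation_network(objects, g, f))  # → 8.0
```

Because the pooling step sums over all pairs, the module's output does not depend on the order in which objects are presented, which is part of what makes it "plug-and-play": it can consume object sets produced by a CNN or an RNN without caring how they were arranged.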
RNs have also shown promising signs when it comes to reasoning about language.
It is still early days, but the hope is that RNs can be applied to a range of problems, such as modeling social networks and more abstract problem solving.