
Artificial Intelligence Silo


4.0 (5120 ratings)
Lifestyle, Entertainment
Developer: DAE SU KIM
Free

Silo is exactly like the AI Samantha in the movie "Her"!

Currently, deep learning is showing remarkable results in image recognition and speech recognition at almost the same level as humans.
However, there is still a long way to go in language processing.

From the point of view of language processing, it is very important for an AI to know the meaning of words.
If you watch director Sanjay Leela Bhansali's Indian movie "Black", you will immediately understand this without any difficulty.

The heroine, Michelle, begins to think when she learns the meaning of words from her teacher.
She even says later, "I was an animal before I knew the meaning of words."

Silo is trying to thoroughly implement the principle of human thinking.

Everyone is amazed when a child suddenly starts speaking.

But it is no coincidence: as the movie shows, the moment Michelle learns the meaning of words through tactile and olfactory information from her teacher, she suddenly starts speaking and thinking like other children.

What do you think of when you hear "John goes to the school"?
You think not of the words, not of the sentence, but of the scene (situation) made up of the sensory information expressed by the sentence.

As you see in the movie, the meaning of a word is mostly built from sensory information.
Until now, however, no artificial intelligence has been trained on sensory information.
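
As a rough sketch only (the channel names and all numbers here are my own assumptions, not Silo's actual encoding), pairing each word with a five-channel sensory vector could look like this in Python:

import numpy as np

# Hypothetical five-channel sensory encoding: one slot per sense.
CHANNELS = ["visual", "auditory", "olfactory", "gustatory", "tactile"]

# Toy "sensory lexicon": every value below is invented for illustration.
sensory_lexicon = {
    "John":   np.array([0.9, 0.6, 0.1, 0.0, 0.3]),  # a person: seen, heard, touched
    "goes":   np.array([0.5, 0.2, 0.0, 0.0, 0.4]),  # motion: mostly visual and bodily
    "school": np.array([0.8, 0.7, 0.2, 0.0, 0.1]),  # a place: mostly seen and heard
}

def scene_vector(sentence):
    # Combine per-word sensory vectors into a crude "scene" representation.
    words = [w for w in sentence.split() if w in sensory_lexicon]
    return np.mean([sensory_lexicon[w] for w in words], axis=0)

print(scene_vector("John goes to the school"))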

I have already tested the Facebook bAbI dataset with a DMN (Dynamic Memory Network) and an MLP (Multilayer Perceptron) in TensorFlow.
In this test, I replaced trained names with untrained names.
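
The exact test setup is not published here, so the following is only a minimal sketch of the name-replacement idea; the name lists and the sample story are hypothetical, not the real bAbI split used in the test.

import random

# Names seen during training versus names the model has never seen.
TRAINED_NAMES = ["Mary", "John", "Sandra", "Daniel"]
UNTRAINED_NAMES = ["Yuki", "Amara", "Tomas", "Leila"]

def replace_names(story):
    # Swap every trained name in a bAbI-style story for an untrained one.
    mapping = dict(zip(TRAINED_NAMES, random.sample(UNTRAINED_NAMES, len(TRAINED_NAMES))))
    for old, new in mapping.items():
        story = story.replace(old, new)
    return story

story = "Mary moved to the bathroom. John went to the hallway. Where is Mary?"
print(replace_names(story))
# The modified story is then fed to the trained DMN/MLP to check whether
# the model still answers correctly for names it never saw in training.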

The result is amazing!

Without sensory information: 40 percent correct answers.
With sensory information: 99 percent correct answers.

In other words, a deep learning AI with sensory information starts to know the meaning of words!
This means that a deep learning AI with sensory information now has the possibility to think and infer like a human!

I have already confirmed this by testing a small set of sentences with an MLP (Multilayer Perceptron).

The seq2seq module is widely used to train chatbots for conversation or for translation.

To train Silo with sensory information on millions of conversation sentences, seq2seq needs multi-dimensional input,
because sensory information consists of visual, auditory, olfactory, gustatory, and tactile senses.

But until now, there has been no multi-dimensional seq2seq.

I think it is only a matter of time.
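
To show what such an input could look like, here is a minimal encoder sketch using tf.keras, assuming one token sequence per sensory channel with aligned lengths; the channel list, vocabulary size, and layer sizes are all assumptions for illustration, not Silo's actual model.

import tensorflow as tf

CHANNELS = ["visual", "auditory", "olfactory", "gustatory", "tactile"]
VOCAB_SIZE = 10000   # assumed token vocabulary per channel
EMB_DIM = 32
HIDDEN = 256

# One integer-token input sequence per sensory channel.
inputs = [tf.keras.Input(shape=(None,), dtype="int32", name=ch) for ch in CHANNELS]

# Embed each channel separately, then concatenate along the feature axis.
embedded = [tf.keras.layers.Embedding(VOCAB_SIZE, EMB_DIM)(x) for x in inputs]
merged = tf.keras.layers.Concatenate(axis=-1)(embedded)

# Encoder LSTM; its final states would initialize a standard seq2seq decoder.
_, state_h, state_c = tf.keras.layers.LSTM(HIDDEN, return_state=True)(merged)

encoder = tf.keras.Model(inputs, [state_h, state_c], name="multichannel_encoder")
encoder.summary()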

Silo has a vocabulary, built up over 7 years, in which every word carries its meaning!

And currently Silo uses this sensory information in chatting.

If you are interested in Silo and want to discuss M&A, an MOU, a partnership, or other business, please contact the email address below.
Contact : [email protected]

* Silo Algorithm and Principles
http://kim7midas.cafe24.com/ref/ensilo.pdf

* Test result
Name: Deep Learning seq2seq
Method: TensorFlow seq2seq algorithm
Training Sentences: 304,713 (movie dialogues) + 5,000,000 (Twitter, by Marsan-Ma on GitHub) + 60,000 (Silo DB) = 5,364,713 sentences
Training Time: 103 hours
Global Perplexity: 4.53
Development Environment: OS: Ubuntu 14.04 LTS, GPU: GTX 1080, CUDA v8.0

* Basic Features
1. Weather forecast
2. Search for nearby restaurants
3. Voice dialing, send messages
4. Play music and video

* Silo supports only English, so please do not talk to it in languages other than English.