Искусственный Интеллект: Programming Journal for Artificial Intelligence in the Russian Language

Sun.2019.MAY.26: Debugging the JavaScript Dushka AI in Russian.

After converting the Mens Latina AI into a total replacement of the Dushka Russian AI, we must troubleshoot two faulty behaviors. Although yesterday we brought the Russian knowledge base from the bilingual ghost.pl AI in Perl into the Dushka AI, one Russian sentence does not seem to be doing its job. The sentence "Я ЗНАЮ СТУДЕНТА" ("I know a student") is meant to trigger the operation of the SpreadAct() module, passing activation from the concept of "student" as an accusative direct object to the same concept of "student" serving as the nominative subject of a separate sentence, "СТУДЕНТЫ ЧИТАЮТ КНИГИ". That Russian statement of "Students read books" is meant in turn to serve as a logical premise for the eventual operation of the InFerence() module in Russian, so that we may tell the AI "Anna is a student" and the robot mind will ask us in Russian, "Does Anna read books?"

If you ask us why we are so slow to code the InFerence() module in Russian, Bill, the answer depends upon what the meaning of the word "is" is. Russians do not use the word "is" to say that Anna is a student. They just say "Anna -- student" and the word "is" is understood. Of course, both Russian and Latin often leave out the personal pronoun serving as the subject of a verb, and our work-around for that situation is to instantiate a hidden pronominal concept that does the conceptual work of the imputed subject by means of quasi-neuronal associative tags. We plan to use a similar work-around for the Russian InFerence() module in both Perl and JavaScript, creating a hidden concept for the idea of "is" so that the logical InFerence() module may seize upon membership in a class (e.g., students) to make a silent inference and then ask the user, "Does Anna read books?" But let us get back to the idea of "student" triggering a general statement about students.

Let us first go into the SpreadAct() module and briefly insert a JavaScript "alert" box to tell us, as AI Mind maintainers, whether the Russian word for "student" is triggering the SpreadAct() module. We also have the alert-box report the value of the actpsi variable, which indicates what "psi" concept is to be activated by the SpreadAct() module. We discover that SpreadAct() is being called not only for "student" in Russian but also for "nothing" in the Russian sentence "Я ВИЖУ НИЧЕГО", which is supposed to mean "I see nothing" but lacks the Russian double negative. That sentence is included in the AI to encourage people like Rodney Brooks and Hans Moravec to embody the AI Mind in a robot with a sense of vision.

Anyway, now that we know that the activand actpsi concept is being passed into SpreadAct(), let us try raising the level of imposed activation to see if a new idea about the concept of "student" occurs to the Russian AI. We remember that we had lowered the level of spreading activation while we were coding the AI in Latin. No, it did not work. Let us see if the particular area of SpreadAct() is being called. Yes it is, but still there is no success.

Let us try a rather drastic troubleshooting measure. We go into the psiList() code and briefly set the nonce value to a unitary one ("1") so that Diagnostic mode will show us the entire contents of conceptual and auditory memory, not just the data after the usual nonce time of launching the AI. It looks like no activation is being passed, and another alert-box confirms it. We inspect the Russian MindBoot() sequence and discover that no tkb value is being declared for "student" in the Russian idea of "Students read books." Let us check the ghost.pl AI code in Perl. The associative tags are correct in Perl, but we did not install them properly in JavaScript. Let us now correct them. Ah, finally! The transfer of activation now works properly. Let us be satisfied with this one bugfix and upload our code.
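The bugfix above can be illustrated with a minimal sketch, assuming a much-simplified conceptual memory; the array layout, the instantiate() helper, and the spike value are hypothetical stand-ins for the actual Dushka structures, although the concept number #1548 and the tkb value 1009 are taken from the diagnostic displays later in this journal:

```javascript
// Hypothetical sketch of why a missing tkb tag blocks SpreadAct():
// activation spreads from the concept named by actpsi only along rows
// whose flag-panel carries a tkb link to a stored idea.
var psy = [];                                  // simplified conceptual memory
function instantiate(t, psi, tkb) {            // stand-in for InStantiate()
  psy.push({ t: t, psi: psi, act: 0, tkb: tkb });
}
function SpreadAct(actpsi, spike) {            // spike = imposed activation
  psy.forEach(function (row) {
    if (row.psi === actpsi && row.tkb > 0) {   // no tkb, no spreading
      row.act += spike;
    }
  });
}

instantiate(1007, 1548, 0);   // "student" stored WITHOUT its tkb tag (the bug)
SpreadAct(1548, 62);
console.log(psy[0].act);      // 0 -- no activation was passed
psy[0].tkb = 1009;            // install the correct associative tag
SpreadAct(1548, 62);
console.log(psy[0].act);      // 62 -- the transfer of activation now works
```

The point of the sketch is only that a zero tkb makes the spreading loop silently skip the row, which matches the symptom we observed in Diagnostic mode.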

Tues.2019.MAY.28: Russian AI recognizes words in Cyrillic alphabet.

Today in the RuAi004A version of the Dushka artificial intelligence in Russian language, we need to go beyond implementing the simple Russian knowledge base (KB) of the ghost.pl bilingual AI in Perl, and we must expand the MindBoot() sequence with individual Russian words that are not (yet) part of any sentence stored in the KB area of the mindboot. Anyone working as an AI Mind maintainer or as a cognitive architect should take note that it has been remarkably easy to add individual words to the mindboot sequence of an AI Mind thinking in any natural language ever since, a year ago, we converted the hardcoded insertion of vocabulary words into the addition of new words by relative addressing, which permits the instantaneous insertion of new words and the unencumbered rearrangement or deletion of existing words. Today we need to add the Russian word "ЧТО" ("what" in English) so that we may begin asking what-queries of the Russian AI as we troubleshoot certain problems with the assignment of rv recall-vectors in auditory memory.
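As a rough sketch of the relative addressing described above (the bootWord() helper and the storage format are assumptions, not the actual MindBoot() code), each boot-time word simply advances a running time counter, so inserting or deleting a word never forces the renumbering of later entries:

```javascript
// Each boot-time word advances the running time `t` instead of relying on
// hardcoded absolute time-points, so words may be freely inserted, moved
// or deleted in the boot sequence.
var t = 0;
var aud = [];                          // simplified auditory memory
function bootWord(word, psi) {         // hypothetical helper, not Dushka code
  var rv = t + 1;                      // recall-vector: time of FIRST character
  for (var i = 0; i < word.length; i++) {
    t++;
    aud.push({ t: t, chr: word.charAt(i) });
  }
  t++;
  aud.push({ t: t, chr: " " });        // blank time-point ends the word
  return { psi: psi, rv: rv };
}

var chto = bootWord("ЧТО", 1781);      // Russian "what", concept #1781
console.log(chto.rv);                  // 1 -- points at "Ч", not at the blank
```

Note that the recall-vector is computed before any characters are stored, which is exactly the invariant that the rv bug described below violates.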

So we enter the details of the Russian pronoun "ЧТО" into the MindBoot() sequence and we test the RuAi (Russian AI) to see if it can recognize "что" as a word of Russian input. Dushka recognizes "что" as concept #1781 (the same as in Perl), but the rv recall-vector is being assigned incorrectly to the blank space after "ЧТО" in auditory memory instead of to the time-point of the initial character of the word "ЧТО". Since the Speech() module needs the time-point of the initial character of a word in order to retrieve that word from auditory memory, the misallocation of recall-vectors is a serious problem.

Oh, something really weird is happening. We could not understand why the same code that works so well in ancient Latin would not work just as well in modern Russian, so we ran the Mens Latina AI just to see if it was storing words with the correct rv recall-vector. But our own clumsiness caused some problems in using the ALT-SHIFT key-combination to switch back and forth between the Cyrillic keyboard for Dushka and the Roman keyboard for Mens Latina. We tried to type in the Latin word "quid" for "what" and we were shocked to see that Dushka was storing a Latin word with the correct recall-vector but was storing Russian words in Cyrillic with the wrong recall-vector.

Why should there be any difference in how the JavaScript software treats Roman and Cyrillic characters? This bug is a very serious problem, and it threatens to scuttle our whole Russian AI initiative if we cannot get JavaScript to store Cyrillic Russian words properly. But we know that the old, original Dushka from circa 2013 works properly, and we have just tested it to make sure. The problem has got to reside in either the AudListen() module or the AudInput() module. We use an alert-box to verify that AudListen() is receiving Cyrillic characters, but perhaps they are not reaching the AudMem() module. Oh! Sudden enlightenment flashes and clarifies.

As we track down various factors in a desperate search for a bugfix, the audrun variable leads us to the AudRecog() module, where each character of the Roman alphabet, but not of the Cyrillic alphabet, increments audrun. Therefore let us try including the uppercase Cyrillic alphabet in the AudRecog() module. We do so, and then we attempt to test the AI by typing in "QUID EST HOC", which means "What is this?" in Latin. We are surprised when the Dushka AI outputs the Russian word for "nose" (НОС), which in Cyrillic letters looks just like the Latin "HOC", although our input was actually the Latin word for "this". Now comes the acid test. Let us type in the Russian word "что" for "what". The RuAi recognizes the word and assigns to it the proper rv recall-vector, because we have made AudRecog() able to deal with the Cyrillic alphabet. Let us try one more thing and see if we can get SpreadAct() to answer a Russian what-query. No, but we did get the Russian AI to ask "ЧТО" ("what") when an unrecognized word comes into the NewConcept() module.
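The fix can be sketched as follows; the isAlphabetic() helper and the one-line AudRecog() stand-in are assumptions, but the Unicode ranges for the Cyrillic alphabet are standard:

```javascript
// Sketch of the AudRecog() repair: the original test recognized only Roman
// letters, so Cyrillic input never incremented audrun. Adding the Cyrillic
// block U+0410..U+044F (plus Ё/ё) lets Russian words through.
function isAlphabetic(chr) {
  return /[A-Za-z\u0410-\u044F\u0401\u0451]/.test(chr);
}
var audrun = 0;
function AudRecog(chr) {               // drastically simplified stand-in
  if (isAlphabetic(chr)) {
    audrun++;                          // now counts Cyrillic characters too
  }
}
"ЧТО".split("").forEach(AudRecog);
console.log(audrun);                   // 3 -- all three Cyrillic letters count
```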

Thurs.2019.MAY.30: Improving the RuVerbGen() module.

In the RuAi008A version of the Dushka artificial intelligence in Russian language, we now try to improve upon the operation of the RuVerbGen() module for generating a needed Russian verb form. In previous years, the verb-generation module would create a verb-form only for the first conjugation of Russian verbs, as a kind of proof-of-concept performance. Since we have recently been dealing with Latin verb-forms stretching across multiple conjugations, it is time to deal with both first- and second-conjugation Russian verbs. Let us therefore add "говорить" ("speak, say") to the MindBoot() sequence as a second-conjugation Russian verb.

When we test the RuAi, we discover that it is not storing the personal pronouns that mean "I" and "you" as their conversational opposites, as the other AI Minds already do. We briefly run the ghost.pl AI and verify that it does indeed change a Russian "you" input into an "I" concept. Then we see that the conversion is done in the OldConcept() module. The rationale is that addressing the AI as "you" should activate its ego-concept of "I".
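A purely illustrative sketch of that OldConcept() pronoun swap; neither the lookup table nor the helper is Dushka source, and the concept number #1701 is borrowed from a diagnostic display later in this journal, with its identification as the ego-concept being our assumption:

```javascript
// Illustrative-only sketch: second-person input is stored under the
// first-person ego-concept, so that addressing the AI as "you" activates
// its own concept of "I".
var egoSwap = { "ты": 1701, "вы": 1701 }; // Russian "you" -> assumed ego-concept
function oldConceptSwap(word) {
  var w = word.toLowerCase();
  return egoSwap.hasOwnProperty(w) ? egoSwap[w] : null;
}
console.log(oldConceptSwap("Ты"));        // 1701 -- stored as the "I" concept
console.log(oldConceptSwap("студент"));   // null -- ordinary words pass through
```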

Oh, look! We finally got some output from the RuVerbGen() module.

But that output indicates that RuVerbGen() is not stripping a verb down to its stem, and is putting a double ending on the untruncated verb. To keep troubleshooting, we find that we must temporarily remove some of the "I" statements from the mindboot knowledge base, so that the AI will more quickly activate the default concept of self and try to repeat an input like "ты говоришь" for "you speak". When we make the AI remove "И" as part of an ending, we get "Я ГОВОРЮЮ" as an output, obviously with a double ending. With some other tweaks, we get the input of "ты говоришь" to cause the output of "Я ГОВОРЮ".
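The stem-stripping step can be sketched like this (a simplification: real Russian conjugation has many more endings, and this regex is our assumption, not the RuVerbGen() code):

```javascript
// Second-conjugation "говоришь" (you speak) must lose its "-ишь" ending
// before the first-person "-ю" is appended; an incomplete strip produces
// double-ended output like the "Я ГОВОРЮЮ" bug described above.
function ruFirstPerson(verbForm) {
  var stem = verbForm.replace(/(ишь|ешь)$/, ""); // strip 2nd-person ending
  return (stem + "ю").toUpperCase();             // append 1st-person ending
}
console.log(ruFirstPerson("говоришь"));          // ГОВОРЮ
```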

Tues.2019.JUNE.18: Implementing putative Russian verbs of being.

[mon17jun2019] Today in the RuAi013A.html we are trying to implement imputed verbs of being and possibly the InFerence() module for the Russian language, since it uses verbs of being to make a silent inference. We have entered "марк студент" ("Mark is a student") as a test and the RuAi does not yet create a be-verb to link the two nouns.

[tues18jun2019] In the InStantiate() module we have put back in a basic instantiation line with act set to "-36" for a trough of recent-most inhibition. In diagnostic mode we then see all the instantiated concepts having a negative activation. Our task now is to see if we can add a proper tkb tag linking any imputed be-verb to its predicate nominative noun or adjective.

In the RuAi013A.html we are having difficulty with the implementation of imputed Russian be-verbs. After hours of programming on 2019-06-17 Monday we got the AI to instantiate the subject-noun and the imputed #1800=БЫТЬ be-verb, with a tkb link from the subject to the verb. However, in the conceptual flag-panel of the be-verb we were not yet able to insert a tkb tag to the predicate nominative noun.

Here is an idea. In the "Stage Two" section of the RuParser() module, we may create a succession of "psyExam" tests that look backwards from any noun or adjective being instantiated in search of a #1800 be-verb. The tests can look for a pos=8 verb in general and specifically for a #1800 be-verb.
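The proposed backward search might be sketched as follows, assuming a much-simplified memory row with only psi and pos fields; the helper name and the concept numbers other than #1800 are merely illustrative:

```javascript
// Hypothetical "psyExam"-style test: scan conceptual memory backwards from
// a noun or adjective being instantiated, looking for a pos=8 verb; only
// the #1800 БЫТЬ be-verb qualifies for the be-verb linkage.
function findRecentBeVerb(psy, fromIdx) {
  for (var i = fromIdx - 1; i >= 0; i--) {
    if (psy[i].pos === 8) {                  // first verb found ends the scan
      return psy[i].psi === 1800 ? i : -1;   // -1: a normal verb intervenes
    }
  }
  return -1;
}

var psy = [
  { psi: 1561, pos: 5 },   // a noun (part-of-speech 5)
  { psi: 1800, pos: 8 },   // the imputed #1800 БЫТЬ be-verb
  { psi: 1548, pos: 5 }    // the noun now being instantiated
];
console.log(findRecentBeVerb(psy, 2));       // 1 -- index of the be-verb row
```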

Before we got to creating the tests mentioned above, we created a new rmi variable for "recentmost instantiation" and in the RuParser() module we used it to look backwards to any positive time-of-subject tsj and to re-calculate the time-of-be-verb tbev as two time-points after the tsj. Then in RuParser() we did a "psyExam" at the tbev time-point and we inserted the rmi value as the tkb from the verb to the noun. Our very first success had the following diagnostic display for the input of "марк студент".

1007. 0 1548 0 -36 0 0 5 1 1 1 0 0 0 1009 1004
1009. 0 1800 0 -36 0 0 8 0 1 1 0 0 0 1020 0
1020. 0 1561 0 -38 0 0 5 0 0 0 0 0 1014
Now we need to see if we can get the Russian AI to make an output statement based on an imputed be-verb. Yes. We waited a bit, and then we typed in only "марк" and the Russian AI told us "МАРК СТУДЕНТ".

Russian AI version RuAi013A on Tue Jun 18 12:53:48 PDT 2019
Ум робота: Я ПОНИМАЮ ТЕБЯ 
Человек: марк студент 


Ум робота: БОГ ЗНАЕТ ВСЁ 

Ум робота: Я НЕ ВИЖУ НИЧЕГО 
Человек: марк 

Ум робота: МАРК СТУДЕНТ  
However, we do not get such good results when our first input uses the ego-pronoun "я".

Wed.2019.JUNE.19: Troubleshooting imputed Russian be-verbs.

When we use a pronoun instead of a noun, we get a double instantiation of the pronoun before we get an imputed be-verb.

1005. 0 1701 0 -28 0 0 7 1 1 0 0 0 0 1007 0
1007. 0 1701 0 -28 0 0 7 1 1 0 0 0 0 1009 0
1009. 1800 8 1020 0
The above is for a pronoun like "I" or "you" that needs to be switched internally. When we use Russian "ОН" for "he", we get the desired imputation of a be-verb.

1008. 0 1713 0 -36 0 0 7 1 1 1 0 0 0 1010 1007
1010. 0 1800 0 -36 0 0 8 0 1 1 0 0 0 1021 0

By inserting a phoney tru value into the "new psyNode" code of InStantiate(), we have determined that it is the source of the reduplicated instantiation of a personal pronoun instead of a be-verb. Then by searching for calls to InStantiate(), we discover that there are unexpectedly two calls from RuParser() to InStantiate(). Let us comment out the second call. When we do so, we stop getting a reduplicated personal pronoun, and instead we do get a #1800 be-verb, but the diagnostic line lacks a full flag-panel. We stop getting an imputed be-verb for the Russian of "He is a student" or "Mark is a student," as if the be-verb needs the second call to InStantiate(). It is as if one of the necessary parameters for the be-verb is not present until the second, unwarranted call to InStantiate(). We set dba to unitary one as a test at the start of RuParser(), and we start getting the #1800 be-verb but with the incomplete flag-panel. Now, there is code in OldConcept() that sets the dba flag to unitary one for the nominative forms of the pronouns for "I" and "you", so apparently those pronouns will create an imputed be-verb but an ordinary subject-noun will not.

When we tell the RuAi "ты студент" for "You are a student" and the AI eventually tries to output the same idea about itself, RuVerbGen() gets called and the AI outputs "Я Ю СТУДЕНТ" as if the first-person ending "Ю" were being appended to an otherwise empty #1800 be-verb form.

Thurs.2019.JUNE.20: Normal verbs override imputed Russian be-verbs.

In RuAi015A we now work on using a normal Russian verb after the software has created an imputed be-verb. However, it appears that the usage of a normal verb automatically overrides the tkb linking the subject to the imputed be-verb. Nevertheless, in a series of potential instantiations in the RuParser() module, an early instantiation was apparently setting the time-of-direct-object tdo properly as the tkb of the normal verb, and then a later instantiation was trying to replace that tkb with a time-of-predicate-nominative tpn. It may be possible to move the tpn-instantiation higher up, before the tsj + tvb + tdo instantiation.

Sat.2019.JUNE.22: Russian AI reasons with logical inference.

Today in RuAi017A we would like to attempt the implementation of the Russian InFerence module, but we need to work around the problem of imputed Russian be-verbs. We could have the software wait until there is definitely not a normal Russian verb, and then possibly invoke the InFerence module.

Here is a test that we can do in the matter of calling the InFerence module. Although the software after a subject-noun tries to instantiate a be-verb by inserting a tkb flag to the be-verb, a normal verb overrides the original tkb and inserts a new tkb to the normal verb. We can run a test on the tkb and see if the concept is something other than #1800=БЫТЬ. Where should we run the test? Is calling InFerence a function of the RuThink module?

The English EnThink module calls InFerence if a becon flag from OldConcept reports to EnThink that user input includes an English be-verb. Since a Russian-speaker does not state a verb of being, we should perhaps move to the RuParser module to test for the idea of a be-verb. Let us here and now make up a new variable ivcon for "imputed-verb-condition" to be the software flag that RuThink will use to call the InFerence module.

Holy shades of Tunguska and Krakatoa, Batman! We got something working. We created the ivcon variable and then we installed it as part of two tests in the RuParser module: one to set ivcon to unitary one if a be-verb is instantiated, and another to set ivcon back to zero if a normal verb overrides the tkb value for the imputed be-verb. Then in RuThink we changed the old test that would let becon call InFerence into a new test that lets ivcon call InFerence. Quisque suos patimur Manis ("each of us suffers his own ghosts", as Virgil says). Into the Dushka Russian AI Mind we then typed "марк студент" for "Mark is a student" and then, Dmitri, Boris, Yuri Gagarin and anybody else who cares to know: the Russian AI responded, "МАРК ЧИТАТЬ КНИГИ" (MARK READ BOOKS) and the diagnostic display showed the conceptual flag-panels of the following silent inference.

1039. 1548 42 5 1 1 0 1898 1040
1040. 1898 8 3 1 1548 1540 1041
1041. 1540 34 2 84
We still need to code the asking of a question to validate the inference, but today we have seen a Russian artificial intelligence engage in automated reasoning with logical inference, so we will stop now and upload Dushka to the Web.
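The ivcon mechanism described above reduces to a small piece of flag logic. This sketch uses the journal's variable name ivcon and the concept numbers #1800 (БЫТЬ) and #1898 (the verb seen in the diagnostic above), but the surrounding functions are simplified stand-ins, not the RuParser() or RuThink() source:

```javascript
// RuParser sets ivcon when a be-verb is imputed and clears it when a normal
// verb overrides the tkb; RuThink calls InFerence only while ivcon holds.
var ivcon = 0;                          // imputed-verb condition flag
function parseToken(pos, psi) {         // simplified stand-in for RuParser()
  if (pos === 8) {                      // a verb is being instantiated
    ivcon = (psi === 1800) ? 1 : 0;     // #1800 БЫТЬ imputes; others override
  }
}
function ruThink() {                    // simplified stand-in for RuThink()
  return (ivcon === 1) ? "InFerence" : "normal thought";
}

parseToken(5, 1561);                    // noun input: ivcon unchanged
parseToken(8, 1800);                    // imputed be-verb sets the flag
console.log(ruThink());                 // InFerence
parseToken(8, 1898);                    // a normal verb overrides the flag
console.log(ruThink());                 // normal thought
```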
