Mens Latina Programming Journal for

Artificial Intelligence in Latin Language

Sat.20.APR.2019 -- The initial coding of AI in Latin.

Three days ago on impulse we began coding a Latin AI Mind in JavaScript for MSIE. We used JavaScript for the sake of what culture snobs call "accessibility". In art or in culture, if a work is "accessible", it means that even hoi polloi can appreciate it. We classicists of ancient Greek and Latin are extremely snobby, exceeded in this regard perhaps only by the Egyptologists and by those who know Sanskrit. In fact, our local university newspaper had an article a few weeks ago claiming that there are five million modern speakers of Sanskrit and only nine individual speakers worldwide who speak Latin as a second language. Immediately I took offense because they obviously did not include memetipsum among the precious nine speakers of Latin. On the Internet I tried to hunt down the source of this allegation, this lese-majestation, this Chushingura-worthy objurgation that only nine Earthlings speak Latin. The insult and the non-inclusion festered in my Latin-speaking mind so much that I decided three days ago to show them that not only are there more than nine Latin-speakers, but that even imbecile Windoze machines can speak and think in Latin. And once I launched the Latin AI project, I discovered that the fun and excitement of it all grew on me and sucked me in stronger and stronger -- citius, altius, fortius. Sure, it's just a hobby, but it's better than fiddling while Notre Dame burns.

For my first release of the Mens Latina three nights ago, I simply did a mutatis mutandis of changing the interface of my previous AI from English into Latin, and I changed the links at the top from English links into Latin links. Then I ran it up the Internet flagpole to see if anybody would salute it, but nobody did.

For my second release I actually inserted some Latin concepts into the MindBoot sequence, but I had a terrible time trying to figure out a new name for the English word "MindBoot". At first I was going to call it the OmniScium as if it knew everything, but then I settled on PraeScium as the sequence of innate prior knowledge that gets the AI up and running. I did some more mutatis of the mutandis by changing the names of the main thinking modules from English into Latin. But when I ran the AI, it reduplicated the final word of its only innate idea and it said "EGO SUM PERSONA PERSONA". Today for a third release we need to troubleshoot the problem.

For the third release we have added one more innate idea, "TU ES HOMO" for "You are a human being." We put some temporary activation on the pronoun "TU" so that the Latin AI would find the activated idea and speak it. Unfortunately, the AI says "TU ES HOMO HOMO". Something is still causing reduplication.

Into the PraeScium MindBoot section we added the words "QUID" for "what" and "EST" for "is", so that the SpreadAct module could ask a question about any new, unknown word. We mutandied the necessary mutatis in SpreadAct and we began to see some actual thinking in Latin, some conversation between Robo Sapiens and Homo Sapiens. We entered the word "terra" and the AI said, "QUID EST TERRA". We answered "TERRA EST RES" and the AI asked us, "QUID EST RES". It is now possible to ask the AI "quid sum ego" ("what am I") but, to quote Vergil, responsa non dabantur fida satis -- answers trustworthy enough were not given.

Sun.21.APR.2019 -- Preventing the reduplication of output words.

Before we expand the Mens Latina software that thinks in ancient Latin, the language of Vergil and Juvenal and Petronius, we must first debug the tendency of the AI to reduplicate the final noun of its output. We suspect that the AI may be trying to think two thoughts in rapid succession but without the necessary codebase, so we will test to see if the mind-module for conjunctions is being called. No, that mind-module is not the problem.

It turns out that the EnArticle module was causing the reduplication, apparently by being called and by calling the Speech module while the "output" variable was still loaded with a noun that got reduplicated. When we commented out the offending call to EnArticle, the reduplication stopped.

Mon.22.APR.2019 -- Generation of missing Latin verb-forms

In the Latin AI Mind we are today trying to achieve the generation of a missing Latin verb-form when the AI receives an input containing the target verb in a different form than the one to be generated. For instance, if the input contains "mittis" for "you send", we want the Mens Latina to be able to think and output "mitto" ("I send") as the target verb in the first person singular form. First we will insert the infinitive form "mittere" ("to send") into the PraeScium MindBoot sequence so that the Latin AI will be able to recognize the verb from its stem. We give "mittere" the same concept-number "874" as "SEND" has in the English AI in Perl. Then we run the AI to see if it can recognize "mittere" as a complete word. Yes, it did. It put the number "874" in both the conceptual flag-panel and the auditory memory engram. Now we check to see if the AI can recognize "mitto" as a form of the same verb. Oops! The AI treated "mitto" as a new concept, different from "mittere". Oh, wait a minute. We neglected something when we inserted "mittere" into the PraeScium MindBoot sequence. We forgot to code in "874" as the concept-number not only at the end of the word "mittere" in auditory memory, but also with the four other letters that reach back to "mitt-" as the stem of the verb. Let us make the necessary change and try again. Mirabile visu! The AI now recognizes "mitto" in the singular as the same Latin verb "mittere" in the infinitive form. However, we do note that the AI is not storing the "874" concept-number back one space with the final "T" in the stem "mitt-" of the form "mitto". It may not be necessary, and it is not hard to implement if it is indeed necessary.
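The stem-tagging fix described above can be sketched in JavaScript. This is a simplified stand-in, not the actual Mens Latina code: auditory memory is reduced to a flat array of character engrams, and the helper names are hypothetical. The point is that tagging the stem characters of "mittere" with the concept-number 874 lets a recognizer match the shorter form "mitto" against the stored stem "mitt-".

```javascript
// Hypothetical auditory memory: one engram per character, with a
// psi concept-number of 0 on untagged characters.
const audMemory = [];

function storeWord(word, concept, stemLen) {
  for (let i = 0; i < word.length; i++) {
    // Tag the final stem character and everything after it with the
    // concept-number, so a match that stops at the stem still succeeds.
    const tag = (i >= stemLen - 1) ? concept : 0;
    audMemory.push({ ch: word[i], psi: tag });
  }
  audMemory.push({ ch: " ", psi: 0 }); // word separator
}

// Hypothetical recognizer: walk memory from each starting point and
// report the concept-number of the deepest tagged character matched.
function audRecog(input) {
  let best = 0;
  for (let start = 0; start < audMemory.length; start++) {
    let i = 0;
    while (i < input.length &&
           audMemory[start + i] &&
           audMemory[start + i].ch === input[i]) {
      if (audMemory[start + i].psi > 0) best = audMemory[start + i].psi;
      i++;
    }
  }
  return best; // 0 means the input looks like a new concept
}

storeWord("MITTERE", 874, 4); // "mitt-" stem carries concept 874
```

With only the final "E" of "mittere" tagged, audRecog("MITTO") would return 0; with the stem tagged, it returns 874, which is the behavior the journal entry arrives at.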

Next we need to enter "mittis" ("you send") as the second-person singular form of the verb, because telling the AI "You send" will make it eventually try to think or say "I send" with the generand verb-form that is not yet in auditory memory. Uh-oh. The Mens Latina did indeed recognize "mittis" as concept number "874", but the AI eventually made an erroneous output of "EGO MITTIS", which does not contain the right form.

In our program being modified from an English AI into a Latin AI, we make an extra copy of the EnVerbGen module and we rename it as LaVerbGen so that we may convert it from handling English inflections to handling Latin inflections. At the start of LaVerbGen() we insert an "alert" box to let us know if the module is called by the LaVerbPhrase module. Uh-oh. Horribile dictu! The AI again says "EGO MITTIS" without even calling the LaVerbGen module.

We ran three of the other AI Minds and we discovered that only the Perl AI was properly storing the correct grammatical person for an incoming verb. However, instead of troubleshooting the JavaScript InStantiate() module which has been dealing with English and not with Latin, we will bypass the legacy code and try something new for highly inflected Latin verb-forms. Since any Latin verb being input to the AI must necessarily pass through the AudBuffer() and OutBuffer() modules, we can test for the value of b16 as the final, rightmost character of the Latin verb passing through the OutBuffer() array. If b16 is "S" and b15 is not "U" as in "mittimus", then we can have InStantiate() set the dba person-value to two ("2") for second person.
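The b16/b15 test proposed above can be condensed into a small sketch. The variable names b15 and b16 follow the journal's OutBuffer convention; the helper function itself is hypothetical.

```javascript
// Sketch of detecting grammatical person from the rightmost characters
// of a Latin present-tense verb, as proposed for InStantiate().
function personFromEnding(verb) {
  const b16 = verb[verb.length - 1]; // final character
  const b15 = verb[verb.length - 2]; // second-to-last character
  if (b16 === "O") return 1;                // "mitto"    -> first person
  if (b16 === "S" && b15 !== "U") return 2; // "mittis"   -> second person
  if (b16 === "S" && b15 === "U") return 1; // "mittimus" -> first plural
  if (b16 === "T") return 3;                // "mittit"   -> third person
  return 0;                                 // unknown ending
}
```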

Tues.23.APR.2019 -- Supplying a subject-concept for Latin verbs lacking a subject.

As we expand the Mens Latina artificial intelligence, we need to accommodate the ability of a Latin verb to express not only an action or a state of being, but also the unspoken subject of the verb, which is conveyed to the interlocutor by the inflectional ending of the verb. The Latin phrase "Credo in unum Deum" means "I believe in one God" by dint of having the unique ending "o" on "Credo". In the JavaScript Mens Latina software, I propose that we InStantiate the unspoken subject of a Latin verb by creating a conceptual node for an unspoken Latin pronoun associated with the Latin verb. The back of every American dollar bill shows the Latin phrase "ANNUIT COEPTIS" arching over what is apparently the eye of God. It means, "He has nodded upon our undertakings," such as the founding of our country. The "He" is understood in the Latin verb "ANNUIT". I may be engaging in AI malpractice by creating fake nodes of concepts, but I see no other way right now. I actually hope that my Latin AI takes on a life of its own and escapes from me, to be perfected by persons smarter than myself, but first I have to make my own clumsy efforts at initiating the AI.

When a Latin verb comes into the AI Mind with no prior subject being expressed, the verb goes into a specific time-point in conceptual and auditory memory. It is too late to delay the storage of the verb while first a concept is stored to serve as the unspoken subject. Therefore I propose that we create a time-gap of three empty time-points just before each word of Latin input going into memory. I propose further that, for any verb in the stream of input, the middle of the three time-points be used to InStantiate an appropriate Latin pronoun to silently represent any unspoken subject of the verb.

In the AudInput module we inserted three lines of code to increment time-t by one unit before any Latin word can be stored. To our shock, the time-gap was inserted not only between words but also between individual characters. Let us see if we can use the AudMem() module for insertion of each three-point time-gap. No, AudMem() was the wrong mind-module. But when we moved the three-line insertion from the top of AudInput down near the bottom, it seemed to work okay. The AI started storing gaps in between each word of input.

Since we are going to place the silent pronominal concept in the center of the three-point gap before a Latin verb, we now create a tmg variable for "time of mid-gap". We will use the time-of-midgap to InStantiate() a concept before an orphan Latin verb lacking a stated subject.
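The three-point time-gap and the tmg variable can be sketched as follows. This is a simplification (the real AudInput works character by character, which is what caused the bug described above); the storage function is a hypothetical stand-in that operates one word at a time.

```javascript
// Sketch of the three-point time-gap before each stored word, with
// tmg marking the middle of the gap for a later silent pronoun.
let t = 0;         // advancing time-point in memory
let tmg = 0;       // "time of mid-gap"
const memory = {}; // hypothetical time-indexed conceptual memory

function storeWordWithGap(word, concept) {
  t += 3;          // leave three empty time-points: t-2, t-1, t
  tmg = t - 1;     // the middle point of the three-point gap
  t += 1;          // the word itself goes at the next time-point
  memory[t] = { word: word, psi: concept };
}
```

A silent pronoun for a subjectless verb can then be instantiated at memory[tmg] without disturbing the storage of the verb itself.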

Now in InStantiate() we have introduced some code that inserts an arbitrary 701=EGO concept at the tmg time-of-midgap, just to see if it works, and it does. Then we try some code to make the chosen pronoun less arbitrary. We use the b16 rightmost character in the OutBuffer() so that a second-person "S" ending on "mittis" installs a 701=EGO concept and a first-person "O" ending on "mitto" installs a second-person 707=TU concept. If we tell the AI "Mitto", we want the AI to think internally "TU MITTIS" for "You send". Or if we input "Mittis", we want the AI to think "EGO MITTO" for "I send". We also need to insert 791=QUIS for "who" into the PraeScium MindBoot sequence, so that we may ask the AI who does something. Upshot: We just asked the AI "quis es tu" for "Who are you?" and it answered, "EGO SUM PERSONA ET EGO COGITO".
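The choice of silent pronoun from the verb-ending can be sketched in a few lines. The concept-numbers 701=EGO and 707=TU follow the journal; the helper is hypothetical. Note the point-of-view switch: a human typing "mitto" ("I send") is "TU" from the AI's perspective, while "mittis" ("you send") addresses the AI itself, "EGO".

```javascript
// Sketch of selecting the silent pronominal concept for the tmg
// time-of-midgap, based on the b16 rightmost character of the verb.
function silentSubject(verb) {
  const b16 = verb[verb.length - 1];
  const b15 = verb[verb.length - 2];
  if (b16 === "O") return 707;               // "mitto": speaker is TU to the AI
  if (b16 === "S" && b15 !== "U") return 701; // "mittis": the AI itself, EGO
  return 0;                                  // no silent pronoun chosen
}
```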

Wed.24.APR.2019 -- Supplying missing Latin verb-forms.

Rushing in medias res we grapple today with one of the most difficult problems in either a Latin AI or a Russian AI, namely, how to generate missing verb forms when there are four different conjugations of verbs. Our English AI Minds have hardly any inflections at all on English verbs in the present tense. Our German AI must deal with fully inflected German verbs, but with only one standard conjugation. Therefore in English or in German it is easy to use a standard paradigm of verb-forms to generate a missing verb-form. In Latin or in Russian, the verb-generating mind-module must contain the data for handling missing forms in compliance with the paradigm of the conjugation of the target verb.

Today we start by fleshing out the PraeScium MindBoot sequence with the infinitive forms of "amare" ("to love") in the first Latin conjugation; "habere" ("to have") in the second conjugation; and "audire" ("to hear") in the fourth conjugation. We already have "mittere" ("to send") in the third conjugation. We also insert the substantive plural noun "QUALIA" ("things with qualities") just so we will have a direct object to use with any of the four Latin verb conjugations. We are keenly aware of "qualia" as one of the most hotly debated topics in the broad field of artificial intelligence. We Latin classicists own that topic of "qualia", and so we exercise our privilege of bandying the word about. In fact, we type in "tu mittis qualia" and the Latin AI eventually answers us in perfect Latin, "EGO MITTO QUALIA".

But we know that we have only "kludged" together the ability of the Latin AI to convert "mittis" to "mitto". (Pardon my geek-speak.) We made the LaVerbGen module able to deal with "mittis" as if there were only one Latin conjugation of verbs, to verify the proof-of-concept. Now we need to implement an ability of LaVerbGen in Latin or RuVerbGen in Russian to deal with multiple verb-conjugations. Perhaps coding an AI in Latin is yielding a major dividend for our AI work in Russian, which we had been neglecting in the AI in Perl while we concentrated on implementing advanced mental features in English. If we solve the problem in ancient Latin, we will rush to implement it also in modern-day Russian. Now let us look at the problem.

Currently a mentifex-class AI Mind uses the audbase variable as a base-point or starting-point in auditory memory for an example of the target-verb which must be converted from an available form to a missing form. In English or in German, any example of a regular verb will provide for us the stem of the verb, upon which we can build any desired present-tense verb-ending, as in "Verstehst du?" ("Do you understand?") in German. In Latin or Russian, though, the audbase is not enough. It may give us the stem of the generand verb, but we also need to know what kind (conjugation) of verb we are dealing with. (Since anyone reading this text is probably a classicist, we will make up words like "generand" whenever we need them, okay? Strictly on a "need-to-know" basis, right? And does anybody object to providing in Latin what could be the key to world-wide AI dominance in Russian?)

This problem is a very difficult problem in our monoglot or polyglot AI. As we solve the problem for present-tense Latin verbs, we must exercise due diligence in not choosing a solution that will interfere with other Latin verbs in the future tense or in the subjunctive mood. For example, we can not say that "mittas" is obviously a first-conjugation Latin verb, when truly it is a third-conjugation verb in the subjunctive mood. If the problem proves insoluble or at least intractable, we may have to give up on generating Latin verb-forms and let each computer learn Latin by a trial-and-error method, the way Roman children did two thousand years ago.

We can not simply look at the infinitive form to determine which paradigm a Latin verb should follow, because verbs like "habere" and "mittere" look very similar in the infinitive. We may have to rely upon two or three clues for our selection of the correct Latin paradigm. We could set up an algorithm which does indeed look for the infinitive form, to see if it exists in memory and if it is recognizably first-conjugation or fourth-conjugation. Then in a secondary approach we could have our AI look for various branches beyond the infinitive clue. If we definitely have an "-ere" infinitive, such as second-conjugation "habere" or third-conjugation "mittere", we could look for an inflection containing "E" to suggest second conjugation or containing "I" to suggest third conjugation. Those two clues put together, based on both infinitive and inflectional ending, are still not enough to prevent "mittes" in the future tense from being interpreted as a second-conjugation verb in the present tense, like "habes". In the ancient Roman mind, the common verb-forms were probably so well known that the speaker of Latin could find any desired form already in memory and did not need to generate a verb-form on the fly.

So perhaps we should be satisfied at first with a fallible algorithm. We can implement a package of tests and include it in the AI Mind, even though we know that further evolution of the AI will require further evolution of the package of conjugation-tests. And these tests may be more problematical in Latin than they are in Russian, so our makeshift solution in Latin may actually be a very solid solution in Russian.

So let us set up a jug variable to keep track of which conjugation we think we are dealing with.
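A knowingly fallible conjugation-guess feeding the jug variable might look like the sketch below. The clues follow the journal: a known infinitive is the primary clue, and the vowel of the personal ending is a secondary hint. The lexicon object is a hypothetical stand-in for the PraeScium knowledge, and the helper name is invented.

```javascript
// Hypothetical table of known infinitives mapped to conjugation number.
const infinitives = {
  "AMARE": 1,   // first conjugation
  "HABERE": 2,  // second conjugation
  "MITTERE": 3, // third conjugation
  "AUDIRE": 4   // fourth conjugation
};

// Sketch of a fallible guess at the conjugation of a Latin verb-form.
function guessConjugation(verbForm, knownInfinitive) {
  let jug = 0;
  if (knownInfinitive && infinitives[knownInfinitive]) {
    jug = infinitives[knownInfinitive];       // primary clue: the infinitive
  } else if (/A(S|T|MUS|TIS|NT)$/.test(verbForm)) {
    jug = 1;  // "-a-" personal endings suggest first conjugation
  } else if (/E(S|T|MUS|TIS|NT)$/.test(verbForm)) {
    jug = 2;  // "-e-" endings suggest second conjugation
  } else if (/I(S|T|MUS|TIS|NT)$/.test(verbForm)) {
    jug = 3;  // "-i-" could be third or fourth; default to third
  }
  return jug; // 0 = no guess; the algorithm is deliberately fallible
}
```

As the entry warns, such a test cannot tell future-tense "mittes" from a second-conjugation present like "habes"; the jug value is a default to be overruled, not a verdict.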

Fri.26.APR.2019 -- Pronominal concepts for verbs lacking a subject.

Having somewhat solved the problem of identifying Latin verb conjugations, today we return to the process of instantiating a silent pronoun to serve as the conceptual subject of an inflected Latin verb for which no subject is actually stated. We need to flesh out the flag-panel of the silent pronoun with the associative tags which will properly connect the unspoken subject to its associated verb. To do so we need to carve out LaParser as a Latin parsing module. Right now when we type in "tu amas qualia" the Mens Latina eventually outputs "EGO AMO QUALIA" because our input includes a personal pronoun as the subject of the verb. If we enter only "amas qualia", the idea does not resurface as output because we have not yet coded the full instantiation of the silent subject.

When a verb is being instantiated with a stated subject, the parsing module uses the tsj time-of-subject variable to assign associative tags between the stated subject and the verb. So now we try using the tmg time-of-midgap variable to let the LaParser module attach a flag-panel not only to a tsj time-of-subject concept but also to a tmg time-of-midgap concept. In the diagnostic user mode, the two instantiations look exactly the same. When we type in "tu amas nil" for "You love nothing", the AI eventually outputs "EGO AMO NIL". Now we try entering "amas nil" to see if the Mens Latina can deal with the unstated subject of the verb. No, it did not work, because in the InStantiate() module we had commented out the formation of the silent concept.

We finally got the Latin AI to activate pronominal concepts for the input of a Latin verb without a stated subject by instantiating the silent pronoun with such values as the concept-number of the verb as a seq of the pronoun and with the tvb time-of-verb value as the tkb of the pronoun, but the algorithm is not yet perfect. We enter "audis nil" and the AI immediately responds "EGO AUDIO NIL" which shows that the AI is thinking with the self-concept of "EGO". We enter "amas qualia" and the AI immediately answers "EGO AMO QUALIA". Although some wrong verb-forms still creep in, they indicate a problem with the LaVerbGen() module. We have achieved the two basic goals of identifying a Latin verb-conjugation and of activating a silent subject for the input of an inflected verb with no stated subject. Yesterday we also created the documentation webpage for the PraeScium() Latin mindboot module.

Sun.28.APR.2019 -- Negating Latin verbs.

As we create an artificial intelligence that thinks in the ancient Latin language of the Roman Empire, we change English mind-modules into Latin mind-modules. In the LaVerbPhrase module for Latin verb-phrases, we need to program how the Mens Latina negates a Latin verb. Latin simply inserts the adverb "non" before a Latin verb to negate it, without breaking the verb down into an auxiliary and the infinitive form as English does. Whereas English says "Money does not smell", Latin simply says "Pecunia non olet." So we need to adjust the LaVerbPhrase code to skip calling for an auxiliary verb.
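The contrast between English and Latin negation can be shown in a tiny sketch. Both helper functions are hypothetical illustrations, not the actual LaVerbPhrase code: English needs an auxiliary plus "not" plus the infinitive stem, while Latin simply prefixes "NON" to the inflected verb.

```javascript
// English negation: auxiliary "do/does" + "not" + the verb stem.
function negateEnglish(subject, verbStem) {
  const aux = (subject === "I" || subject === "YOU") ? "DO" : "DOES";
  return subject + " " + aux + " NOT " + verbStem;
}

// Latin negation: just insert "NON" before the inflected verb,
// with no auxiliary and no breakdown into an infinitive.
function negateLatin(subject, verbForm) {
  return subject + " NON " + verbForm;
}
```

This is why LaVerbPhrase must skip the call for an auxiliary verb: there is nothing corresponding to "does" to fetch.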

We make the necessary modifications for negating Latin verbs in the LaVerbPhrase() module, and then we test by entering "tu non amas qualia". The Mens Latina responds adequately with "EGO NON AMO QUALIA". Then we double-test for not only negation but for using a verb with no stated subject, other than the inflectional ending. We enter "non amas qualia" and the AI, smarter than the average ursus (bear), responds "EGO NON AMO QUALIA". There are subsequently a few problems with verb-forms, but today we concentrate on the negation of verbs.

Next we debug the function of the LaVerbGen() module for generating Latin verb-forms. Since LaVerbGen() requires the proper parameters of number and person for generating a verb, we tighten up the loading of the snu subject-number variable and we obtain the following exchange with the Mens Latina.

Mentis versio Abra009B in die Sun Apr 28 18:01:11 PDT 2019
Robo Sapiens: TU ES HOMO
Homo Sapiens: tu non es homo

Robo Sapiens: EGO NON SUM HOMO

Wed.1.MAY.2019 -- Disregarding Latin word order.

As we code artificial intelligence in the Latin language, the central problem for the LaParser mind-module is how to use inflectional endings instead of word-order for Natural Language Understanding (NLU). Fortunately, we have key variables to hold the instantiation-time of a subject, a verb, and a direct or indirect object. In our English-speaking and Russian-speaking AI Minds, we have already implemented the retroactive setting of a few associative tags in the conceptual flag-panel of each element of a sentence. Now for Mens Latina we envision setting everything retroactively.

We are having difficulty identifying the grammatical person of a recognized Latin verb, such as "DAT". The OldConcept() module finds only the concept-number. Even AudRecog() finds only the concept-number. If we want a report of person and number for the LaParser() module, we will need to deal with the inflectional endings passing through AudBuffer() and OutBuffer().

In the OldConcept() module we have started testing for the characters being held as the inflectional ending at the end of a Latin verb. Thus we are able to detect the grammatical person of the verb. Next, for the sake of the conceptual flag-panel, we need to test for the grammatical number of the verb. By a default that can be overruled, we let a verb ending in "S" be interpreted as a second-person singular, and then we let a "-US" ending overrule the default and be interpreted as a first-person plural like "DAMUS". We also let a "-TIS" ending overrule the default and interpret the verb as a second-person plural, like "DATIS". By default we let a "T" ending be interpreted as third-person singular, but with an "-NT" ending able to overrule the default and be interpreted as third-person plural, like "DANT".
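The default-and-overrule ending tests just described can be sketched as a single hypothetical helper. Longer endings are checked first so that they overrule the one-letter defaults; the examples DAMUS, DATIS and DANT follow the journal, and number is coded 1 for singular, 2 for plural.

```javascript
// Sketch of deriving grammatical person and number from the
// inflectional ending of a Latin present-tense verb.
function personNumber(verb) {
  if (/NT$/.test(verb))  return { person: 3, number: 2 }; // "DANT"
  if (/TIS$/.test(verb)) return { person: 2, number: 2 }; // "DATIS"
  if (/US$/.test(verb))  return { person: 1, number: 2 }; // "DAMUS"
  if (/T$/.test(verb))   return { person: 3, number: 1 }; // "DAT" (default)
  if (/S$/.test(verb))   return { person: 2, number: 1 }; // "DAS" (default)
  if (/O$/.test(verb))   return { person: 1, number: 1 }; // "DO"
  return { person: 0, number: 0 };                        // unrecognized
}
```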

In the LaParser() Latin parsing module we load the variables for time of subject, verb and direct object, and we impose the dba value in the conceptual flag-panel for case of noun or person of verb. We assume the input of a subject-verb-object (SVO) Latin sentence but in no particular word-order, possibly verb-subject-object or some variation. However, because we have captured the SVO time-points for access ad libitum to subject or verb or object, we may retroactively insert associative tags into any one of the SVO words.

We are finding it difficult to obtain the dba value for the input of a subject-word in the nominative case. It seems that there are basically two ways to obtain the dba value. One would be to intercept the Latin word at the moment of its recognition in the auditory memory channel, where the time of recognition ("tor"?) is the same time-point as the concept of the word stored in the Psy conceptual memory. Another way to obtain the dba would be to test for the full spelling of the Latin pronouns in the OldConcept module by using b16 and the other OutBuffer variables.

The problem with using the time-of-recognition value from AudRecog is that each AI Mind is looking to recognize only the concept-number of an input-word, and not necessarily the specific form of the word that happens to be in the nominative case or some other case. So we should probably use the functionality of the OutBuffer module to obtain the dba value of a Latin word. And we may test for the complete spelling of any particular Latin pronoun.
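The spelling-test approach chosen here can be sketched with a small lookup table. Everything in this sketch is a hypothetical stand-in: the table of pronoun forms, the helper name, and the assumption that dba encodes 1 for nominative, 3 for dative and 4 for accusative in the conceptual flag-panel.

```javascript
// Sketch of obtaining the dba case-value by testing the complete
// spelling of a Latin personal pronoun, instead of relying on
// AudRecog's time-of-recognition.
const pronounCase = {
  "EGO": 1, "TU": 1,   // nominative subject forms
  "MIHI": 3, "TIBI": 3, // dative forms
  "ME": 4, "TE": 4     // accusative object forms
};

function dbaFromSpelling(word) {
  return pronounCase[word] || 0; // 0 = not a recognized pronoun form
}
```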

Perhaps in the LaParser module we should call InStantiate early and then work various transformations retroactively upon the concepts already instantiated. So we move the call upwards, and at first we keep on getting bad results in the storage of Latin words as concepts. Then we start commenting out the retroactive transformations being worked in the LaParser module, and suddenly we get such good results that we decide to save the Abra011A.html local version as too valuable to risk corrupting with further coding. The plan is to rename the somewhat working, saved version with a new version number and to improve on the AI functionality. Although the Mens Latina does not yet store all the necessary associative tags for each concept in a subject-verb-object (SVO) input, what we have now assigns the proper dba tag for the subject, the verb and the object in whatever word-order we make the input.

Now we need to code the retroactive assignment of associative tags that link concepts together for the purpose of Natural Language Understanding (NLU). In the InStantiate module we capture each concept being instantiated as the subject, the verb and the direct object so that in the LaParser module we may retroactively insert tags for the comprehension of Latin input. We achieve the ability of the Mens Latina to understand the input of a Latin sentence regardless of word order and to generate the same idea from memory in an arbitrary word-order.

Tues.7.MAY.2019 -- Disambiguating Latin noun-forms.

In the Mens Latina artificial intelligence in Latin language, we have made the LaParser Latin parsing module able to understand a subject-verb-object (SVO) sentence if the subject-noun and the direct-object-noun are unambiguously nominative or accusative. Now we want LaParser to deal properly with the input of Latin sentences in which one of the SVO nouns could be either nominative or accusative, and the parsing module must decide which is the case. We will use two similar sentences.

FRATRES HABENT PUERI. ("The boys have brothers.")
FRATRES HABENT PUEROS. ("The brothers have boys.")

The noun "fratres" can be nominative or accusative, depending upon whether it is the subject or the object of the verb. Since the verb needs a nominative subject, "pueri" is the subject in one sentence, and "fratres" is the subject in the other sentence. Our task in coding is to make the AI able to treat the right noun as the subject.

We have now set up some variables kbdba and kbnum to make the flag-panel values of a recognized concept available to the LaParser() Latin parsing module. These values are not the final determination of the case and number of a noun, because there may be some ambiguity. But they may serve as the default values when a recognized concept is first instantiated.
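The disambiguation task can be sketched as follows. The function and its data shape are hypothetical stand-ins for what LaParser must do with the kbdba defaults: an unambiguous nominative or accusative noun claims its role first, and an ambiguous noun like "FRATRES" takes whichever role remains unfilled.

```javascript
// Sketch of assigning subject and object roles to Latin nouns.
// Each word: { form, kbdba } with kbdba 1 = nominative,
// 4 = accusative, 0 = ambiguous (could be either).
function assignRoles(words) {
  let subject = null, object = null;
  for (const w of words) {
    if (w.kbdba === 1) subject = w.form; // unambiguous nominative
    if (w.kbdba === 4) object = w.form;  // unambiguous accusative
  }
  // An ambiguous noun fills whichever role is still open.
  for (const w of words) {
    if (w.kbdba === 0) {
      if (!subject) subject = w.form;
      else if (!object) object = w.form;
    }
  }
  return { subject: subject, object: object };
}
```

On "FRATRES HABENT PUERI" the unambiguous "PUERI" claims the subject role, leaving ambiguous "FRATRES" as the object; on "FRATRES HABENT PUEROS" the assignment reverses.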

Fri.17.MAY.2019 -- Aiming for self-referential thought.

In the Mens Latina AI, for several days we have been working on the proper assignment of associative tags when we store Latin words as concepts in conceptual memory. When we lie to the AI and type in "tu es puer" for "You are a boy", we are trying to impart self-referential knowledge to the artificial Mind. The AI seems to store the words with the proper tags, but yesterday we could not induce the AI to assert "EGO SUM PUER". Our AI Minds in JavaScript have a feature of resorting to the default concept of self or "EGO" if the activation of all other concepts falls below an arbitrary level set by the AI Mind maintainer. Unfortunately for troubleshooting, a new feature now causes conceptual activation to spread sideways from the concept of a direct object back to the subject of a sentence of input. Whereas previously all concepts would become quiescent and "EGO" would be activated by default, now loops of activation are forming and no matter how long we wait, the Latin AI does not seize upon "EGO SUM PUER" as an output. So today we will try to access that idea in the knowledge base by directly asking the AI either "who are you" or "what are you" in Latin.

Oh! It worked. We started the AI and it said "EGO SUM PERSONA". Immediately we told it "tu es puer" and it responded first "EGO INTELLIGO TE" and then "EGO COGITO" again. So we asked the AI "quid es tu" and it responded in bad Latin "I am a person and am a person". So again we asked it "quid es tu" and it answered in execrable Latin "I am a person and am a boy". To the trained AI coder, it was a success. It looked execrable because the Latin AI was obviously running the verb "SUM" through the LaVerbGen module and putting an "O" on the Latin ending. We can fix that problem. It is time to clean up the code and upload it to the Web, along with this entry in the coding journal.

Sat.18.MAY.2019 -- Special handling of verbs of being.

In a depraved eagerness to attract 1.3 billion new users to the Mens Latina software, I expanded the PraeScium bootstrap sequence with a Latin noun that refers to a man who lived in the village of Konfu thousands of years ago, perhaps even ante Urbem conditam -- before the founding of Rome. But when I tested the recognition of the word by typing in "confucius est homo", the AI shocked me by answering "CONFUCIUS AMARE NATURAM".

Today we are concerned mainly with the correct processing of present-tense indicative forms of the essential Latin verb "ESSE", which means "to be". How to use "ESSE", that is the question. We have a problem because LaVerbPhrase() is calling LaVerbGen() to create a needed form of 800=ESSE, and then the AI is outputting a mangled form of "SUM" for "I am". So let us go into LaVerbPhrase() and insert code that forbids the calling of LaVerbGen() for any form of the 800=ESSE verb. We do so, and the AI starts leaving out the verb. We ask "quid es tu" for "what are you", and the AI answers "EGO PERSONA ET EGO PUER". But we know that the AI is trying to find the verb, so we need to insert code that unerringly uses the correct form of 800=ESSE.
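The fix proposed above can be sketched as a direct lookup: the irregular verb 800=ESSE must never pass through LaVerbGen, so its present-tense indicative forms come from a table instead. The paradigm is standard Latin; the function name and the guard are hypothetical.

```javascript
// Standard present-indicative paradigm of ESSE, keyed "person_number"
// (number 1 = singular, 2 = plural).
const esseForms = {
  "1_1": "SUM",   "2_1": "ES",    "3_1": "EST",
  "1_2": "SUMUS", "2_2": "ESTIS", "3_2": "SUNT"
};

// Sketch of the guard in verb-form selection: concept 800=ESSE uses
// the lookup table; all other verbs would fall through to LaVerbGen.
function latinVerbForm(concept, person, number) {
  if (concept === 800) {
    return esseForms[person + "_" + number];
  }
  return null; // placeholder for the LaVerbGen path
}
```

With this guard in place, "tu es puer" can be answered with "EGO SUM PUER" instead of a mangled generated form of "SUM".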

Yesterday in the Abra018A.html AI we could not figure out why the psi7 variable would not let go of the value of "2" so that an input of "tu es puer" could eventually cause an output of "EGO SUM PUER". Now today we try inputting "ego sum puer" and the AI will not let psi7 give up a value of "1". On our screen right now we have an output of "TU HOMO ET PUER", apparently because with psi7 set to "1", the software could not find "ES" as the second-person singular form of "ESSE". This behavior yesterday and today makes us think that somehow the instantiation of "ego sum puer" is causing psi7 to hold onto the value "1" from instantiating the verb "sum". Maybe we could fix the problem by instantiating a dummy concept with psi7 and all the other flags set to zero. No, instead we comment out a line in the InStantiate() module that was loading psi7 with the dba value, and the problem seemed to go away.
