Mens Latina Programming Journal for Artificial Intelligence in Ancient Latin


Sat.20.APR.2019 -- The initial coding of AI in Latin.

Three days ago on impulse we began coding a Latin AI Mind in JavaScript for MSIE. We used JavaScript for the sake of what culture snobs call "accessibility". In art or in culture, if a work is "accessible", it means that even hoi polloi can appreciate it. We classicists of ancient Greek and Latin are extremely snobby, exceeded in this regard perhaps only by the Egyptologists and by those who know Sanskrit. In fact, our local university newspaper had an article a few weeks ago claiming that there are five million modern speakers of Sanskrit and only nine individual speakers worldwide who speak Latin as a second language. Immediately I took offense because they obviously did not include memetipsum among the precious nine speakers of Latin. On the Internet I tried to hunt down the source of this allegation, this lese-majestation, this Chushingura-worthy objurgation that only nine Earthlings speak Latin. The insult and the non-inclusion festered in my Latin-speaking mind so much that I decided three days ago to show them that not only are there more than nine Latin-speakers, but that even imbecile Windoze machines can speak and think in Latin. And once I launched the Latin AI project, I discovered that the fun and excitement of it all grew on me and sucked me in stronger and stronger -- citius, altius, fortius. Sure, it's just a hobby, but it's better than fiddling while Notre Dame burns.

For my first release of the Mens Latina three nights ago, I simply did a mutatis mutandis of changing the interface of my previous AI from English into Latin, and I changed the links at the top from English links into Latin links. Then I ran it up the Internet flagpole to see if anybody would salute it, but nobody did.

For my second release I actually inserted some Latin concepts into the MindBoot sequence, but I had a terrible time trying to figure out a new name for the English word "MindBoot". At first I was going to call it the OmniScium as if it knew everything, but then I settled on PraeScium as the sequence of innate prior knowledge that gets the AI up and running. I did some more mutatis of the mutandis by changing the names of the main thinking modules from English into Latin. But when I ran the AI, it reduplicated the final word of its only innate idea and it said "EGO SUM PERSONA PERSONA". Today for a third release we need to troubleshoot the problem.

For the third release we have added one more innate idea, "TU ES HOMO" for "You are a human being." We put some temporary activation on the pronoun "TU" so that the Latin AI would find the activated idea and speak it. Unfortunately, the AI says "TU ES HOMO HOMO". Something is still causing reduplication.

Into the PraeScium MindBoot section we added the words "QUID" for "what" and "EST" for "is", so that the SpreadAct module could ask a question about any new, unknown word. We mutandied the necessary mutatis in SpreadAct and we began to see some actual thinking in Latin, some conversation between Robo Sapiens and Homo Sapiens. We entered the word "terra" and the AI said, "QUID EST TERRA". We answered "TERRA EST RES" and the AI asked us, "QUID EST RES". It is now possible to ask the AI "quid sum ego" but, to quote Vergil, responsa non dabantur fida satis ("answers sufficiently trustworthy were not given").


Sun.21.APR.2019 -- Preventing the reduplication of output words.

Before we expand the Mens Latina software that thinks in ancient Latin, the language of Vergil and Juvenal and Petronius, we must first debug the tendency of the AI to reduplicate the final noun of its output. We suspect that the AI may be trying to think two thoughts in rapid succession but without the necessary codebase, so we will test to see if the mind-module for conjunctions is being called. No, that mind-module is not the problem.

It turns out that the EnArticle module was causing the reduplication, apparently by being called and by calling the Speech module while the "output" variable was still loaded with a noun that then got spoken a second time. When we commented out the offending call to EnArticle, the reduplication stopped.
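Here is a minimal JavaScript sketch of the bug and the fix, not the actual Mens Latina code; the names EnArticle, Speech and "output" follow the journal's terminology, while the calling routine LaNounPhrase is a made-up stand-in for this illustration.

```javascript
// Sketch only: shows why a stale "output" plus a stray EnArticle() call
// could yield "EGO SUM PERSONA PERSONA".
var spoken = [];            // words actually uttered, for inspection
var output = "";            // word queued for the Speech() module
function Speech() { spoken.push(output); }
function EnArticle() {
  // legacy English-article behavior: it re-sent whatever noun was
  // still sitting in "output", producing the reduplicated noun
  Speech();
}
function LaNounPhrase(noun) {
  output = noun;
  Speech();
  // EnArticle();   // commented out: Latin has no articles, and this
                    // call was the source of the reduplication
}
LaNounPhrase("PERSONA");    // spoken is ["PERSONA"], not ["PERSONA", "PERSONA"]
```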


Mon.22.APR.2019 -- Generation of missing Latin verb-forms

In the Latin AI Mind we are today trying to achieve the generation of a missing Latin verb-form when the AI receives an input containing the target verb in a different form than the one to be generated. For instance, if the input contains "mittis" for "you send", we want the Mens Latina to be able to think and output "mitto" ("I send") as the target verb in the first person singular form. First we will insert the infinitive form "mittere" ("to send") into the PraeScium MindBoot sequence so that the Latin AI will be able to recognize the verb from its stem. We give "mittere" the same concept-number "874" as "SEND" has in the ghost.pl English AI in Perl. Then we run the AI to see if it can recognize "mittere" as a complete word. Yes, it did. It put the number "874" in both the conceptual flag-panel and the auditory memory engram. Now we check to see if the AI can recognize "mitto" as a form of the same verb. Oops! The AI treated "mitto" as a new concept, different from "mittere". Oh, wait a minute. We neglected something when we inserted "mittere" into the PraeScium MindBoot sequence. We forgot to code in "874" as the concept-number not only at the end of the word "mittere" in auditory memory, but also with the four other letters that reach back to "mitt-" as the stem of the verb. Let us make the necessary change and try again. Mirabile visu! The AI now recognizes "mitto" in the singular as the same Latin verb "mittere" in the infinitive form. However, we do note that the AI is not storing the "874" concept-number back one space with the final "T" in the stem "mitt-" of the form "mitto". It may not be necessary, and it is not hard to implement if it is indeed necessary.
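As a rough illustration, not the real PraeScium or auditory-memory code, here is a toy JavaScript sketch in which each character engram carries an optional audpsi concept-number, showing why the 874 tag should sit not only on the final letter of "MITTERE" but also at the end of the stem "MITT-", so that a later "MITTO" can be recognized from the shared stem. The data layout and function name are assumptions.

```javascript
var audMem = [];                          // array of { chr, audpsi } engrams
function storeBootWord(word, psi, stemLen) {
  for (var i = 0; i < word.length; i++) {
    var tag = 0;
    if (i === word.length - 1) tag = psi;   // tag the final character of the word
    if (i === stemLen - 1)     tag = psi;   // and the last character of the stem
    audMem.push({ chr: word.charAt(i), audpsi: tag });
  }
  audMem.push({ chr: " ", audpsi: 0 });     // blank engram as a word separator
}
storeBootWord("MITTERE", 874, 4);           // stem "MITT-" is four letters long
// A later input of "MITTO" can now match the stem M-I-T-T and pick up the
// 874 concept-number parked on the final "T" of the stem.
```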

Next we need to enter "mittis" ("you send") as the second-person singular form of the verb, because telling the AI "You send" will make it eventually try to think or say "I send" with the generand verb-form that is not yet in auditory memory. Uh-oh. The Mens Latina did indeed recognize "mittis" as concept number "874", but the AI eventually made an erroneous output of "EGO MITTIS", which does not contain the right form.

In our program being modified from an English AI into a Latin AI, we make an extra copy of the EnVerbGen module and we rename it as LaVerbGen so that we may convert it from handling English inflections to handling Latin inflections. At the start of LaVerbGen() we insert an "alert" box to let us know if the module is called by the LaVerbPhrase module. Uh-oh. Horribile dictu! The AI again says "EGO MITTIS" without even calling the LaVerbGen module.

We ran three of the other AI Minds and we discovered that only the Perl AI was properly storing the correct grammatical person for an incoming verb. However, instead of troubleshooting the JavaScript InStantiate() module which has been dealing with English and not with Latin, we will bypass the legacy code and try something new for highly inflected Latin verb-forms. Since any Latin verb being input to the AI must necessarily pass through the AudBuffer() and OutBuffer() modules, we can test for the value of b16 as the final, rightmost character of the Latin verb passing through the OutBuffer() array. If b16 is "S" and b15 is not "U" as in "mittimus", then we can have InStantiate() set the dba person-value to two ("2") for second person.
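A hedged sketch of that ending test, with the b15 and b16 buffer names taken from the journal and everything else assumed:

```javascript
// Sets the dba person-value from the last two characters of a Latin verb.
function personFromEnding(b15, b16) {
  var dba = 0;                        // grammatical person, 0 = undetermined
  if (b16 === "S" && b15 !== "U") {   // "mittis" yes, "mittimus" no
    dba = 2;                          // second person
  }
  return dba;
}
personFromEnding("I", "S");   // 2 for "mittis"
personFromEnding("U", "S");   // 0 for "mittimus" (left for other tests)
```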


Tues.23.APR.2019 -- Supplying a subject-concept for Latin verbs lacking a subject.

As we expand the Mens Latina artificial intelligence, we need to accommodate the ability of a Latin verb to express not only an action or a state of being, but also the unspoken subject of the verb, which is conveyed to the interlocutor by the inflectional ending of the verb. The Latin phrase "Credo in unum Deum" means "I believe in one God" by dint of having the unique ending "o" on "Credo". In the JavaScript Mens Latina software, I propose that we InStantiate the unspoken subject of a Latin verb by creating a conceptual node for an unspoken Latin pronoun associated with the Latin verb. The back of every American dollar bill shows the Latin phrase "ANNUIT COEPTIS" arching over what is apparently the eye of God. It means, "He has nodded upon our undertakings," such as the founding of our country. The "He" is understood in the Latin verb "ANNUIT". I may be engaging in AI malpractice by creating fake nodes of concepts, but I see no other way right now. I actually hope that my Latin AI takes on a life of its own and escapes from me, to be perfected by persons smarter than myself, but first I have to make my own clumsy efforts at initiating the AI.

When a Latin verb comes into the AI Mind with no prior subject being expressed, the verb goes into a specific time-point in conceptual and auditory memory. It is too late to delay the storage of the verb while first a concept is stored to serve as the unspoken subject. Therefore I propose that we create a time-gap of three empty time-points just before each word of Latin input going into memory. I propose further that, for any verb in the stream of input, the middle of the three time-points be used to InStantiate an appropriate Latin pronoun to silently represent any unspoken subject of the verb.

In the AudInput module we inserted three lines of code to increment time-t by one unit before any Latin word can be stored. To our shock, the time-gap was inserted not only between words but also between individual characters. Let us see if we can use the AudMem() module for insertion of each three-point time-gap. No, AudMem() was the wrong mind-module. But when we moved the three-line insertion from the top of AudInput down near the bottom, it seemed to work okay. The AI started storing gaps in between each word of input.

Since we are going to place the silent pronominal concept in the center of the three-point gap before a Latin verb, we now create a tmg variable for "time of mid-gap". We will use the time-of-midgap to InStantiate() a concept before an orphan Latin verb lacking a stated subject.

Now in InStantiate() we have introduced some code that inserts an arbitrary 701=EGO concept at the tmg time-of-midgap, just to see if it works, and it does. Then we try some code to make the chosen pronoun less arbitrary. We use the b16 rightmost character in the OutBuffer() so that a second-person "S" ending on "mittis" installs a 701=EGO concept and a first-person "O" ending on "mitto" installs a second-person 707=TU concept. If we tell the AI "Mitto", we want the AI to think internally "TU MITTIS" for "You send". Or if we input "Mittis", we want the AI to think "EGO MITTO" for "I send". We also need to insert 791=QUIS for "who" into the PraeScium MindBoot sequence, so that we may ask the AI who does something. Upshot: We just asked the AI "quis es tu" for "Who are you?" and it answered, "EGO SUM PERSONA ET EGO COGITO".
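The gist of the silent-pronoun insertion might look something like the following sketch, using the journal's concept numbers 701=EGO and 707=TU; the data structure and function name are illustrative assumptions rather than the real InStantiate() code.

```javascript
var psy = {};                          // toy conceptual memory, keyed by time-point
function silentSubject(tmg, b16) {
  if (b16 === "S") {                   // "mittis" = "you send", addressed to the AI,
    psy[tmg] = { psi: 701 };           // so the silent subject is the AI itself: 701=EGO
  } else if (b16 === "O") {            // "mitto" = "I send", spoken by the human,
    psy[tmg] = { psi: 707 };           // so the silent subject is 707=TU from the AI's view
  }
}
silentSubject(42, "O");                // input "mitto"  -> AI can think "TU MITTIS"
silentSubject(57, "S");                // input "mittis" -> AI can think "EGO MITTO"
```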


Wed.24.APR.2019 -- Supplying missing Latin verb-forms.

Rushing in medias res we grapple today with one of the most difficult problems in either a Latin AI or a Russian AI, namely, how to generate missing verb forms when there are four different conjugations of verbs. Our English AI Minds have hardly any inflections at all on English verbs in the present tense. Our German AI must deal with fully inflected German verbs, but with only one standard conjugation. Therefore in English or in German it is easy to use a standard paradigm of verb-forms to generate a missing verb-form. In Latin or in Russian, the verb-generating mind-module must contain the data for handling missing forms in compliance with the paradigm of the conjugation of the target verb.

Today we start by fleshing out the PraeScium MindBoot sequence with the infinitive forms of "amare" ("to love") in the first Latin conjugation; "habere" ("to have") in the second conjugation; and "audire" ("to hear") in the fourth conjugation. We already have "mittere" ("to send") in the third conjugation. We also insert the substantive plural noun "QUALIA" ("things with qualities") just so we will have a direct object to use with any of the four Latin verb conjugations. We are keenly aware of "qualia" as one of the most hotly debated topics in the broad field of artificial intelligence. We Latin classicists own that topic of "qualia", and so we exercise our privilege of bandying the word about. In fact, we type in "tu mittis qualia" and the Latin AI eventually answers us in perfect Latin, "EGO MITTO QUALIA".

But we know that we have only "kludged" together the ability of the Latin AI to convert "mittis" to "mitto". (Pardon my geek-speak.) We made the LaVerbGen module able to deal with "mittis" as if there were only one Latin conjugation of verbs, to verify the proof-of-concept. Now we need to implement an ability of LaVerbGen in Latin or RuVerbGen in Russian to deal with multiple verb-conjugations. Perhaps coding an AI in Latin is yielding a major dividend for our AI work in Russian, which we had been neglecting in the ghost.pl AI in Perl while we concentrated on implementing advanced mental features in English. If we solve the problem in ancient Latin, we will rush to implement it also in modern-day Russian. Now let us look at the problem.

Currently a mentifex-class AI Mind uses the audbase variable as a base-point or starting-point in auditory memory for an example of the target-verb which must be converted from an available form to a missing form. In English or in German, any example of a regular verb will provide for us the stem of the verb, upon which we can build any desired present-tense verb-ending, as in "Verstehst du?" ("Do you understand?") in German. In Latin or Russian, though, the audbase is not enough. It may give us the stem of the generand verb, but we also need to know what kind (conjugation) of verb we are dealing with. (Since anyone reading this text is probably a classicist, we will make up words like "generand" whenever we need them, okay? Strictly on a "need-to-know" basis, right? And does anybody object to providing in Latin what could be the key to world-wide AI dominance in Russian?)

This problem is a very difficult problem in our monoglot or polyglot AI. As we solve the problem for present-tense Latin verbs, we must exercise due diligence in not choosing a solution that will interfere with other Latin verbs in the future tense or in the subjunctive mood. For example, we can not say that "mittas" is obviously a first-conjugation Latin verb, when truly it is a third-conjugation verb in the subjunctive mood. If the problem proves insoluble or at least intractable, we may have to give up on generating Latin verb-forms and let each computer learn Latin by a trial-and-error method, the way Roman children did two thousand years ago.

We can not simply look at the infinitive form to determine which paradigm a Latin verb should follow, because verbs like "habere" and "mittere" look very similar in the infinitive. We may have to rely upon two or three clues for our selection of the correct Latin paradigm. We could set up an algorithm which does indeed look for the infinitive form, to see if it exists in memory and if it is recognisably first-conjugation or fourth-conjugation. Then in a secondary approach we could have our AI look for various branches beyond the infinitive clue. If we definitely have an "-ere" infinitive like for second-conjugation "habere" or third-conjugation "mittere", we could look for an inflection containing "E" to suggest second conjugation or containing "I" to suggest third conjugation. Those two clues put together, based on both infinitive and inflectional ending, are still not enough to prevent "mittes" in the future tense from being interpreted as a second-conjugation verb in the present tense, like "habes". In the ancient Roman mind, the common verb-forms were probably so well known that the speaker of Latin could find any desired form already in memory and did not need to generate a verb-form on the fly.

So perhaps we should be satisfied at first with a fallible algorithm. We can implement a package of tests and include it in the AI Mind, even though we know that further evolution of the AI will require further evolution of the package of conjugation-tests. And these tests may be more problematical in Latin than they are in Russian, so our makeshift solution in Latin may actually be a very solid solution in Russian.

So let us set up a jug variable to keep track of which conjugation we think we are dealing with.
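Under the assumption that a fallible guesser is acceptable for now, here is a sketch of the sort of conjugation test the jug variable could record. The heuristics are only the clues discussed above, and, as already conceded, a future form like "mittes" will still fool them; all names are assumptions.

```javascript
var jug = 0;   // 1..4 = guessed conjugation, 0 = unknown
function guessConjugation(infinitive, inflectedForm) {
  if (/are$/i.test(infinitive)) return 1;        // amare
  if (/ire$/i.test(infinitive)) return 4;        // audire
  if (/ere$/i.test(infinitive)) {
    // second ("habere") and third ("mittere") look alike in the infinitive,
    // so fall back on the vowel of the personal ending we actually have:
    if (/e(s|t|mus|tis|nt)$/i.test(inflectedForm))   return 2;  // habes, habet
    if (/i(s|t|mus|tis)$|unt$/i.test(inflectedForm)) return 3;  // mittis, mittunt
    return 3;                                    // default guess for -ere verbs
  }
  return 0;
}
jug = guessConjugation("mittere", "mittis");     // 3
jug = guessConjugation("habere",  "habes");      // 2
// Caveat from the journal: the future form "mittes" would be misread as conjugation 2.
```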


Fri.26.APR.2019 -- Pronominal concepts for verbs lacking a subject.

Having somewhat solved the problem of identifying Latin verb conjugations, today we return to the process of instantiating a silent pronoun to serve as the conceptual subject of an inflected Latin verb for which no subject is actually stated. We need to flesh out the flag-panel of the silent pronoun with the associative tags which will properly connect the unspoken subject to its associated verb. To do so we need to carve out LaParser as a Latin parsing module. Right now when we type in "tu amas qualia" the Mens Latina eventually outputs "EGO AMO QUALIA" because our input includes a personal pronoun as the subject of the verb. If we enter only "amas qualia", the idea does not resurface as output because we have not yet coded the full instantiation of the silent subject.

When a verb is being instantiated with a stated subject, the parsing module uses the tsj time-of-subject variable to assign associative tags between the stated subject and the verb. So now we try using the tmg time-of-midgap variable to let the LaParser module attach a flag-panel not only to a tsj time-of-subject concept but also to a tmg time-of-midgap concept. In the diagnostic user mode, the two instantiations look exactly the same. When we type in "tu amas nil" for "You love nothing", the AI eventually outputs "EGO AMO NIL". Now we try entering "amas nil" to see if the Mens Latina can deal with the unstated subject of the verb. No, it did not work, because in the InStantiate() module we had commented out the formation of the silent concept.

We finally got the Latin AI to activate pronominal concepts for the input of a Latin verb without a stated subject by instantiating the silent pronoun with such values as the concept-number of the verb as a seq of the pronoun and with the tvb time-of-verb value as the tkb of the pronoun, but the algorithm is not yet perfect. We enter "audis nil" and the AI immediately responds "EGO AUDIO NIL" which shows that the AI is thinking with the self-concept of "EGO". We enter "amas qualia" and the AI immediately answers "EGO AMO QUALIA". Although some wrong verb-forms still creep in, they indicate a problem with the LaVerbGen() module. We have achieved the two basic goals of identifying a Latin verb-conjugation and of activating a silent subject for the input of an inflected verb with no stated subject. Yesterday we also created the documentation webpage for the PraeScium() Latin mindboot module.


Sun.28.APR.2019 -- Negating Latin verbs.

As we create an artificial intelligence that thinks in the ancient Latin language of the Roman Empire, we change English mind-modules into Latin mind-modules. In the LaVerbPhrase module for Latin verb-phrases, we need to program how the Mens Latina negates a Latin verb. Latin simply inserts the adverb "non" before a Latin verb to negate it, without breaking the verb down into an auxiliary and the infinitive form as English does. Whereas English says "Money does not smell", Latin simply says "Pecunia non olet." So we need to adjust the LaVerbPhrase code to skip calling for an auxiliary verb.

We make the necessary modifications for negating Latin verbs in the LaVerbPhrase() module, and then we test by entering "tu non amas qualia". The Mens Latina responds adequately with "EGO NON AMO QUALIA". Then we double-test for not only negation but for using a verb with no stated subject, other than the inflectional ending. We enter "non amas qualia" and the AI, smarter than the average ursus (bear), responds "EGO NON AMO QUALIA". There are subsequently a few problems with verb-forms, but today we concentrate on the negation of verbs.
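A minimal sketch of the negation behavior, assuming a much simplified verb-phrase routine: the adverb "non" is simply prefixed to the finite verb, and no auxiliary-verb module is ever called.

```javascript
function negatedVerbPhrase(verbForm, negate) {
  var words = [];
  if (negate) {
    words.push("NON");    // the adverb goes directly before the finite verb;
                          // nothing like EnAuxVerb() is needed for Latin
  }
  words.push(verbForm);
  return words.join(" ");
}
negatedVerbPhrase("AMO", true);    // "NON AMO"
negatedVerbPhrase("OLET", true);   // "NON OLET", as in "Pecunia non olet"
```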

Next we debug the function of the LaVerbGen() module for generating Latin verb-forms. Since LaVerbGen() requires the proper parameters of number and person for generating a verb, we tighten up the loading of the snu subject-number variable and we obtain the following exchange with the Mens Latina.

Mentis versio Abra009B in die Sun Apr 28 18:01:11 PDT 2019
Robo Sapiens: TU ES HOMO
Homo Sapiens: tu non es homo

Robo Sapiens: TU ES HOMO ET EGO SUM PERSONA
Homo Sapiens:

Robo Sapiens: EGO NON SUM HOMO
Homo Sapiens:

Robo Sapiens: HOMINES AMANT NATURAM
Homo Sapiens:

Robo Sapiens: NATURA NON FACIT SALTUM
Homo Sapiens:

Robo Sapiens: EGO NON SUM HOMINES
Homo Sapiens:

Robo Sapiens: HOMINES AMANT NATURAM ET NATURA NON FACIT SALTUM
Homo Sapiens:

Robo Sapiens: NATURA NON FACIT SALTUM
Homo Sapiens:

Robo Sapiens: HOMINES AMANT NATURAM
Homo Sapiens:


Wed.1.MAY.2019 -- Disregarding Latin word order.

As we code artificial intelligence in the Latin language, the central problem for the LaParser mind-module is how to use inflectional endings instead of word-order for Natural Language Understanding (NLU). Fortunately, we have key variables to hold the instantiation-time of a subject, a verb, and a direct or indirect object. In our English-speaking and Russian-speaking AI Minds, we have already implemented the retroactive setting of a few associative tags in the conceptual flag-panel of each element of a sentence. Now for Mens Latina we envision setting everything retroactively.

We are having difficulty in identifying the grammatical person of a recognized Latin verb, such as "DAT". The OldConcept() module finds only the concept-number. Even AudRecog() finds only the concept-number. If we want a report of person and number for the LaParser() module, we will need to deal with the inflectional endings passing through AudBuffer() and OutBuffer().

In the OldConcept() module we have started testing for the characters being held as the inflectional ending at the end of a Latin verb. Thus we are able to detect the grammatical person of the verb. Next, for the sake of the conceptual flag-panel, we need to test for the grammatical number of the verb. By a default that can be overruled, we let a verb ending in "S" be interpreted as a second-person singular, and then we let a "-US" ending overrule the default and be interpreted as a first-person plural like "DAMUS". We also let a "-TIS" ending overrule the default and interpret the verb as a second-person plural, like "DATIS". By default we let a "T" ending be interpreted as third-person singular, but with an "-NT" ending able to overrule the default and be interpreted as third-person plural, like "DANT".
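Expressed as a hedged JavaScript sketch, the default-and-overrule tests described above might look like this; the function and field names are assumptions, while the rules themselves are the ones just listed.

```javascript
// Each later test is allowed to overrule an earlier default, as in the journal.
function verbPersonNumber(word) {
  var w = word.toUpperCase();
  var dba = 0;   // grammatical person
  var num = 1;   // grammatical number, 1 = singular, 2 = plural
  if (/S$/.test(w))   { dba = 2; num = 1; }  // default:  DAS   = second singular
  if (/US$/.test(w))  { dba = 1; num = 2; }  // overrule: DAMUS = first plural
  if (/TIS$/.test(w)) { dba = 2; num = 2; }  // overrule: DATIS = second plural
  if (/T$/.test(w))   { dba = 3; num = 1; }  // default:  DAT   = third singular
  if (/NT$/.test(w))  { dba = 3; num = 2; }  // overrule: DANT  = third plural
  return { dba: dba, num: num };
}
verbPersonNumber("DAT");    // { dba: 3, num: 1 }
verbPersonNumber("DAMUS");  // { dba: 1, num: 2 }
```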

In the LaParser() Latin parsing module we load the variables for time of subject, verb and direct object, and we impose the dba value in the conceptual flag-panel for case of noun or person of verb. We assume the input of a subject-verb-object (SVO) Latin sentence but in no particular word-order, possibly verb-subject-object or some variation. However, because we have captured the SVO time-points for access ad libitum to subject or verb or object, we may retroactively insert associative tags into any one of the SVO words.

We are finding it difficult to obtain the dba value for the input of a subject-word in the nominative case. It seems that there are basically two ways to obtain the dba value. One would be to intercept the Latin word at the moment of its recognition in the auditory memory channel, where the time of recognition ("tor"?) is the same time-point as the concept of the word stored in the Psy conceptual memory. Another way to obtain the dba would be to test for the full spelling of the Latin pronouns in the OldConcept module by using b16 and the other OutBuffer variables.

The problem with using the time-of-recognition value from AudRecog is that each AI Mind is looking to recognize only the concept-number of an input-word, and not necessarily the specific form of the word that happens to be in the nominative case or some other case. So we should probably use the functionality of the OutBuffer module to obtain the dba value of a Latin word. And we may test for the complete spelling of any particular Latin pronoun.

Perhaps in the LaParser module we should call InStantiate early and then work various transformations retroactively upon the concepts already instantiated. So we move the call upwards, and at first we keep on getting bad results in the storage of Latin words as concepts. Then we start commenting out the retroactive transformations being worked in the LaParser module, and suddenly we get such good results that we decide to save the Abra011A.html local version as too valuable to risk corrupting with further coding. The plan is to rename the somewhat working, saved version with a new version number and to improve on the AI functionality. Although the Mens Latina does not yet store all the necessary associative tags for each concept in a subject-verb-object (SVO) input, what we have now assigns the proper dba tag for the subject, the verb and the object in whatever word-order we make the input.

Now we need to code the retroactive assignment of associative tags that link concepts together for the purpose of Natural Language Understanding (NLU). In the InStantiate module we capture each concept being instantiated as the subject, the verb and the direct object so that in the LaParser module we may retroactively insert tags for the comprehension of Latin input. We achieve the ability of the Mens Latina to understand the input of a Latin sentence regardless of word order and to generate the same idea from memory in an arbitrary word-order.
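A rough sketch of the retroactive tagging idea, with an assumed flag-panel layout and assumed case codes (1 for nominative, 4 for accusative), shows how the captured time-points make the linkage independent of word-order.

```javascript
var psy = [];                     // conceptual memory, one flag-panel per time-point
var tsj = 0, tvb = 0, tdo = 0;    // time of subject, verb, direct object
function instantiate(t, psi, pos, dba) {
  psy[t] = { psi: psi, pos: pos, dba: dba, seq: 0, tkb: 0 };
  if (pos === "noun" && dba === 1) tsj = t;   // nominative noun -> subject
  if (pos === "verb")              tvb = t;
  if (pos === "noun" && dba === 4) tdo = t;   // accusative noun -> direct object
}
function retroTag() {             // run once the clause of input is complete
  if (tsj && tvb) { psy[tsj].seq = psy[tvb].psi; psy[tsj].tkb = tvb; }
  if (tvb && tdo) { psy[tvb].seq = psy[tdo].psi; psy[tvb].tkb = tdo; }
}
// Object-verb-subject word-order still yields the same linkage
// (concept numbers here are placeholders):
instantiate(11, 123, "noun", 4);  // LIBROS  (accusative)
instantiate(14, 456, "verb", 3);  // SCRIBIT
instantiate(17, 789, "noun", 1);  // MARCUS  (nominative)
retroTag();   // the subject points to the verb, and the verb points to the object
```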


Tues.7.MAY.2019 -- Disambiguating Latin noun-forms.

In the Mens Latina artificial intelligence in Latin language, we have made the LaParser Latin parsing module able to understand a subject-verb-object (SVO) sentence if the subject-noun and the direct-object-noun are unambiguously nominative or accusative. Now we want LaParser to deal properly with the input of Latin sentences in which one of the SVO nouns could be either nominative or accusative, and the parsing module must decide which is the case. We will use two similar sentences.

FRATRES HABENT PUERI. ("The boys have brothers.")
FRATRES HABENT PUEROS. ("The brothers have boys.")

The noun "fratres" can be nominative or accusative, depending upon whether it is the subject or the object of the verb. Since the verb needs a nominative subject, "pueri" is the subject in one sentence, and "fratres" is the subject in the other sentence. Our task in coding is to make the AI able to treat the right noun as the subject.

We have now set up some variables kbdba and kbnum to make the flag-panel values of a recognized concept available to the LaParser() Latin parsing module. These values are not the final determination of the case and number of a noun, because there may be some ambiguity. But they may serve as the default values when a recognized concept is first instantiated.
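As an illustration of the disambiguation that the kbdba and kbnum defaults feed into, here is a toy JavaScript sketch in which an unambiguous partner noun decides the reading of an ambiguous one; the lexicon and the function are made up for the example.

```javascript
function resolveCases(nounA, nounB) {
  // toy lexicon of possible cases: 1 = nominative, 4 = accusative
  var possible = { FRATRES: [1, 4], PUERI: [1], PUEROS: [4] };
  var b = possible[nounB];
  if (b.length === 1) {                         // the partner noun is unambiguous
    var leftover = (b[0] === 1) ? 4 : 1;        // the case left over for nounA
    return { subject: (leftover === 1) ? nounA : nounB,
             object:  (leftover === 1) ? nounB : nounA };
  }
  return { subject: nounA, object: nounB };     // otherwise fall back on defaults
}
resolveCases("FRATRES", "PUERI");   // subject PUERI,   object FRATRES
resolveCases("FRATRES", "PUEROS");  // subject FRATRES, object PUEROS
```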


Fri.17.MAY.2019 -- Aiming for self-referential thought.

In the Mens Latina AI, for several days we have been working on the proper assignment of associative tags when we store Latin words as concepts in conceptual memory. When we lie to the AI and type in "tu es puer" for "You are a boy", we are trying to impart self-referential knowledge to the artificial Mind. The AI seems to store the words with the proper tags, but yesterday we could not induce the AI to assert "EGO SUM PUER". Our AI Minds in JavaScript have a feature of resorting to the default concept of self or "EGO" if the activation of all other concepts falls below an arbitrary level set by the AI Mind maintainer. Unfortunately for troubleshooting, a new feature now causes conceptual activation to spread sideways from the concept of a direct object back to the subject of a sentence of input. Whereas previously all concepts would become quiescent and "EGO" would be activated by default, now loops of activation are forming and no matter how long we wait, the Latin AI does not seize upon "EGO SUM PUER" as an output. So today we will try to access that idea in the knowledge base by directly asking the AI either "who are you" or "what are you" in Latin.

Oh! It worked. We started the AI and it said "EGO SUM PERSONA". Immediately we told it "tu es puer" and it responded first "EGO INTELLIGO TE" and then "EGO COGITO" again. So we asked the AI "quid es tu" and it responded in bad Latin with the equivalent of "I am a person and am a person". So again we asked it "quid es tu" and it answered in execrable Latin with the equivalent of "I am a person and am a boy". To the trained AI coder, it was a success. It looked execrable because the Latin AI was obviously running the verb "SUM" through the LaVerbGen module and putting an "O" on the Latin ending. We can fix that problem. It is time to clean up the code and upload it to the Web, along with this entry in the coding journal.


Sat.18.MAY.2019 -- Special handling of verbs of being.

In a depraved eagerness to attract 1.3 billion new users to the Mens Latina software, I augmented the PraeScium bootstrap sequence with a Latin noun that refers to a man who lived in the village of Konfu thousands of years ago, perhaps even ante Urbem conditam -- before the founding of Rome. But when I tested the recognition of the word by typing in "confucius est homo", the AI shocked me by answering "CONFUCIUS AMARE NATURAM".

Today we are concerned mainly with the correct processing of present-tense indicative forms of the essential Latin verb "ESSE", which means "to be". How to use "ESSE", that is the question. We have a problem because LaVerbPhrase() is calling LaVerbGen() to create a needed form of 800=ESSE, and then the AI is outputting a mangled form of "SUM" for "I am". So let us go into LaVerbPhrase() and insert code that forbids the calling of LaVerbGen() for any form of the 800=ESSE verb. We do so, and the AI starts leaving out the verb. We ask "quid es tu" for "what are you", and the AI answers "EGO PERSONA ET EGO PUER". But we know that the AI is trying to find the verb, so we need to insert code that unerringly uses the correct form of 800=ESSE.
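One way to sketch the special handling of 800=ESSE is a simple lookup of the finished irregular forms by person and number, instead of any stem-based generation; the table and function name below are illustrative assumptions, not the actual LaVerbPhrase() code.

```javascript
// The forms of "to be" are irregular, so they are looked up, never generated.
var esseForms = {
  "1,1": "SUM",   "2,1": "ES",    "3,1": "EST",
  "1,2": "SUMUS", "2,2": "ESTIS", "3,2": "SUNT"
};
function formOfEsse(person, number) {   // number: 1 = singular, 2 = plural
  return esseForms[person + "," + number] || "EST";
}
formOfEsse(1, 1);  // "SUM" -> "EGO SUM PUER"
formOfEsse(2, 1);  // "ES"  -> "TU ES HOMO"
```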

Yesterday in the Abra018A.html AI we could not figure out why the psi7 variable would not let go of the value of "2" so that an input of "tu es puer" could eventually cause an output of "EGO SUM PUER". Now today we try inputting "ego sum puer" and the AI will not let psi7 give up a value of "1". On our screen right now we have an output of "TU HOMO ET PUER", apparently because with psi7 set to "1", the software could not find "ES" as the second-person singular form of "ESSE". This behavior yesterday and today makes us think that somehow the instantiation of "ego sum puer" is causing psi7 to hold onto the value "1" from instantiating the verb "sum". Maybe we could fix the problem by instantiating a dummy concept with psi7 and all the other flags set to zero. No, instead we comment out a line in the InStantiate() module that was loading psi7 with the dba value, and the problem seemed to go away.


Mon.24.JUN.2019 -- Approaching Logical Inference

In the Abra020A version of the Mens Latina we are trying to implement automated reasoning with logical inference. Accordingly we have added "PROFESSORES SCRIBUNT LIBROS" ("Professors write books") to the mindboot sequence so that we may tell the AI Mind in Latin that some person is a professor, and we may then see whether the artificial intelligence infers that said person perhaps writes books. After all, "Aut publica aut pereas," which is a rough translation of "Publish or perish."

We are so emboldened to work on Latin AI inference because recently we achieved logical inference in the Dushka artificial intelligence that thinks in Russian. It was quite difficult for us tenues grandia to achieve Russian inference upon an "Is-A" input like "Mark is a student" because Russians don't say "is". They just say "Mark -- student." Latin does the same thing -- at times -- but not consistently like the Russians do. Now, I'm not saying we won't get our hair mussed coding inference in Latin, but the only big difficulty in Latin will be the lack of words for "yes" or "no" when we ask the human user to confirm a Latin inference. We will perhaps pretend that "sic" means "yes" and "non" means "no", somewhat like in Spanish, where they say "Si!" or "No!" We can accept "non" not only as a stand-alone negating answer; we can also look at the entire sentence of response and see whether any word in the sentence is "non".

Now, we have actually created inference already because we typed in "confucius est professor" and the Mens Latina gave as a response, "CONFUCIUS SCRIBERE LIBROS". The diagnostic display showed the conceptual flag-panels of the following silent inference.

805.
806. 517 42 5 1 1 0 899 807
807. 899 8 3 1 517 540 808
808. 540 34 2 139
809.
The above inference takes up only three time-points (806-808) in conceptual memory because there is no corresponding phonemic sentence in auditory memory. When the AI speaks the inference out loud or onto the screen, then we see the ample time-points of "CONFUCIUS SCRIBERE LIBROS". In fact, let us now embed "STUDENTES LEGUNT LIBROS" ("Students read books") in the mindboot sequence so that we may have the option of entering "Marcus est studens" to see if the AI will infer that Marcus reads books.
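For readers who want the gist in code, here is a very rough JavaScript sketch of the inference step, not the actual InFerence() module; the knowledge-base layout, the plural lookup and all the names are assumptions for illustration.

```javascript
var knowledge = [
  { subject: "PROFESSORES", verb: "SCRIBUNT", object: "LIBROS" },
  { subject: "STUDENTES",   verb: "LEGUNT",   object: "LIBROS" }
];
var plurals = { PROFESSOR: "PROFESSORES", STUDENS: "STUDENTES" };
// An "X est Y" input is matched against a remembered generalization about
// the plural of Y, and a silent new idea about X is laid down.
function inFerenceSketch(name, category) {       // e.g. ("MARCUS", "STUDENS")
  var plural = plurals[category];
  for (var i = 0; i < knowledge.length; i++) {
    if (knowledge[i].subject === plural) {
      return { subject: name,                    // silent inference, awaiting
               verb: knowledge[i].verb,          // confirmation by the user
               object: knowledge[i].object };
    }
  }
  return null;
}
inFerenceSketch("MARCUS", "STUDENS");
// -> { subject: "MARCUS", verb: "LEGUNT", object: "LIBROS" }
// (LaVerbGen still has to turn the plural "LEGUNT" into the singular "LEGIT")
```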


Mon.24.JUN.2019 -- Automated Reasoning with Logical Inference

We have gotten the Mens Latina to trigger a silent inference when we enter a Latin sentence like "marcus est studens" for "Marcus is a student." We have verified that the yncon flag causes the Latin-thinking LaThink() module to call the AskUser() module, but the ancient Latin AI is not yet asking a question. Uh-oh. We must now apologize to the Department of Classics at every Tellurian university, because the AskUser() module has been trying to call the English EnAuxVerb() module but, as every honorable schoolboy knows, Latin does not need an auxiliary verb to ask a simple question.

Now we must change the word-order of the yes-or-no question being asked by the AskUser() module, to put the Latin verb first, then the subject, then the object. We move the AskUser() code for the verb to the first position and we input "marcus est studens". The AskUser() module outputs "LEGERE MARCUS LIBROS" which is getting closer to a question in Latin. Now we must aim for the proper person and grammatical number of the Latin verb. We are not allowed to say, "Ego sum imperator Romanus et super grammatica" ("I am the Roman emperor and above grammar"), as we once read in "The Caesars: Might and Madness", a book translated from the German. To obtain the correct form of the Latin verb, we must start coding parameters into the search for the verb. We start by removing the requirement that the verb form be infinitive, which works in English after an auxiliary verb but not in Latin.

Now we must apply Occam's Razor, which says "Entia non sunt multiplicanda praeter necessitatem" ("Entities must not be multiplied beyond necessity"). We do not want to insert an excess of grammatical rules here. We will use the noun-phrase number nphrnum flag to make sure that the query-verb will have the same grammatical number as the query-subject, which is typically going to be in the singular number. Then we will use the third person for the verb as a default -- no, maybe not, because the human user should be able to enter "ego sum studens" and expect the AI to make an inference and ask a question with a second-person verb, such as "Do you read books?" But which parameter holds the person of the verb? Mehercule! Mind-design is hard enough in English, but in Latin it is only for the non compos mentis.

We discovered that the prsn parameter holds the person of the verb, but then we got caught in a loop and we had to click a JavaScript "alert" box hundreds of times to exit from the loop. At least we learned that we need to put our test for calling LaVerbGen() at the bottom of the loop, not at every instance of search for the necessary Latin verb-form. But we need to send an identifying verbpsi into LaVerbGen(). We did so, but we also needed to send an audbase parameter for where a form of the verb has its starting "base" in auditory memory. For that purpose we used an infinitive form, and the AI responded to "marcus est studens" with "LEGE MARCUS LIBROS" which is not yet correct. We need to add some more code to the LaVerbGen() module to handle more parameters for a missing verb-form. We added more code and we did more troubleshooting, and finally we got the Latin AI to respond to "marcus est studens" with "LEGITNE MARCUS LIBROS?" or "Does Marcus read books?" Then we tried entering "marcus est professor" and the Mens Latina asked us "SCRIBITNE MARCUS LIBROS?" or "Does Marcus write books?"
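The verb-first question format that AskUser() finally produces can be sketched very simply, assuming the verb has already been inflected for the right person and number by a LaVerbGen-style routine; the interrogative enclitic -NE is just attached to the verb before the subject and object follow. The helper name is an assumption.

```javascript
function askUserSketch(verbForm, subject, object) {
  // verb-first yes-or-no question with the enclitic -NE on the verb
  return verbForm + "NE " + subject + " " + object + "?";
}
askUserSketch("LEGIT",   "MARCUS", "LIBROS");  // "LEGITNE MARCUS LIBROS?"
askUserSketch("SCRIBIT", "MARCUS", "LIBROS");  // "SCRIBITNE MARCUS LIBROS?"
```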


Wed.26.JUN.2019 -- Adjusting the Knowledge Base for Logical Inference

In the Mens Latina Abra022A software we are ready to maintain the KbRetro() module that retroactively adjusts the knowledge base (KB) of the AI Mind when the InFerence() module has made a silent inference and the human user is answering a question to validate or deny the truth of the inferred idea. For instance, the AI may know in Latin that "Studentes legunt libros" or "Students read books." If we tell the AI "marcus est studens" or "Mark is a student," the AI may logically infer the possibility that Mark reads books. So the Latin AI will ask, "Legitne Marcus libros?" Strictly speaking, ancient Latin has no special words for "yes" or "no", but to demonstrate the AI software we may treat Latin "sic" as meaning "yes" and Latin "non" as meaning "no". In actual Latin, a Roman might use the word "non" in a sentence denying the inference, and the software can detect the word "non". If the human user can not answer "yes" in Latin but simply affirms the inference by saying "Marcus legit libros," the statement of truth in the indicative mood is just as good as a "yes" answer in English.

When we observe that the English AI responds better to "yes" or "no" than the Latin AI, we wonder why, and we discover that the Latin noun-phrase module is lacking some code which we bring in mutatis mutandis from the English AI. The response of the Latin AI to "sic" or "non" as query-answers improves remarkably, except for some mangled Latin verb-forms which seem to originate in the LaVerbGen() module, and which we must troubleshoot in another coding session.
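A hedged sketch of the answer-checking idea described above: scan the whole reply for "non" or "sic" rather than demanding a one-word answer, and treat an indicative restatement as an affirmation. The function and its return values are made up for illustration.

```javascript
function judgeAnswer(reply) {
  var words = reply.toLowerCase().split(/\s+/);
  if (words.indexOf("non") >= 0) return "deny";     // any "non" negates the inference
  if (words.indexOf("sic") >= 0) return "affirm";   // treat "sic" as a Latin "yes"
  return "affirm-by-restatement";                   // e.g. "marcus legit libros"
}
judgeAnswer("non");                      // "deny"
judgeAnswer("marcus non legit libros");  // "deny"
judgeAnswer("marcus legit libros");      // "affirm-by-restatement"
```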


Tues.9.JUL.2019 -- Assigning Associative Tags among AI Concepts

The Mens Latina Abra028A version is having trouble with the assignment of a tkb flag to link the verb of a Latin input with the direct object of the verb. This flag is necessary for the idea to be retrievable from memory in the future. The flags are assigned for the parts of a Subject-Verb-Object (SVO) sentence. Each word as it comes in is quickly instantiated as a subject at the tsj point, or as a verb at the tvb point, or as a direct object at the tdo point. As the remainder of the sentence or clause of input comes in, the LaParser() Latin parsing module retroactively inserts associative tags by using the SVO time-points to identify and gain access to the conceptual flag-panel of the subject or verb or object.

We go into the InStantiate module and we insert some code to load the tdo time-of-direct-object flag with the time-point when there is already a positive tsj time-of-subject flag and now a noun or pronoun is again being instantiated, but not as the object of a preposition. (Notate Bene, classici: Further stipulations may need to be included that the noun or pronoun is not an indirect object.) We run the Latin AI and we enter: "ego specto robotes" ("I watch robots"). After outputting eleven various thoughts, the AI suddenly and erroneously albeit encouragingly says: "TU SPECT ROBOTES". Clearly the AI is trying to say "You watch robots" but the LaVerbGen() module is not generating the proper form of the verb. Therefore in the LaVerbGen() module we add some code for forming a second-person singular verb from the stem of an infinitive ending in "-ARE". Again we input "ego specto robotes". The AI outputs eleven ruminations, of which the last is "EGO INTELLIGO TE" ("I understand you"). Thinking of the concept of "you" causes the SpreadAct() module to pass activation to ideas stored in memory with "you" as the subject, including the idea of watching robots. The Mens Latina outputs "TU SPECTAS ROBOTES" ("You watch robots") -- quod erat demonstrandum.
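The sort of rule added to LaVerbGen() can be sketched as follows for the first conjugation only; a real module needs the other conjugations, persons and numbers, and the function name here is an assumption.

```javascript
function secondSingularFromAre(infinitive) {        // e.g. "SPECTARE"
  if (/ARE$/.test(infinitive)) {
    return infinitive.replace(/ARE$/, "AS");        // SPECTARE -> SPECTAS
  }
  return infinitive;     // other conjugations need their own rules
}
secondSingularFromAre("SPECTARE");   // "SPECTAS" -> "TU SPECTAS ROBOTES"
secondSingularFromAre("AMARE");      // "AMAS"
```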


Thurs.25.JUL.2019 -- Expanding the Conceptual Flag-Panel of Associative Tags

The conceptual flag-panel needs to be expanded from its current fifteen tags to quite a few more. Currently we have:

Concepts with associative tags for Natural Language Understanding:
krt tru psi hlc act mtx jux pos dba num mfn pre iob seq tkb rv
xx 00 01 02 003 004 005 006 007 008 009 010 011 12 13 14

We may need:

krt tru psi hlc act mtx jux pos dba num mfn pre tgn tdt tkb tia tcj tdj tdv tpr aud
xx 00 01 02 003 004 005 006 007 008 009 010 011 12 13 14 15 16 17 018 019

16 tgn -- time of genitive
17 tdt -- time of dative
18 tkb time-in-knowledge-base (verb or direct object) [Will still be "13".]
19 tia time-of-instrumental-ablative
20 tov vocative? No -- generate on-the-fly.
21 locative? No -- not yet.

22 tdj time of adjective
23 tdv time-of-adverb
time of interjection? No, not yet.

24 tcj -- time of conjunction
25 tpr -- time of preposition

img gus olf tac aud -- for five robotic senses;

[wed24jul2019] Consider the following Latin sentence.
Frater viri scribit patri epistolam manu in schola et expectat responsum. ("The man's brother writes a letter to his father by hand in school and awaits a reply.")
It contains the five cases nominative, genitive, dative, accusative and ablative, plus a preposition and a conjunction. In order to "lodge" or "remember" such a sentence in the memory of an AI Mind, we need to outfit the AI program with internal linkages or "associative tags" which connect all the concepts in such a way as to render the Latin sentence "thinkable" when the AI program is running.

We create an AbraTEST.html file from our latest Mens Latina and in the InStantiate() module we break apart a "new psyNode" line of code into two lines to see if doing so creates an "error" message. It does not. The Latin AI still runs normally. Therefore we know that we may increase the "new psyNode" code from handling fifteen elements to handling basically any number of elements, which is already known to be possible in Perl and in Forth. There is a further issue of how easy it will be to display all the expanded elements in Diagnostic mode, but that issue does not really matter in the nitty-gritty considerations of Mind-Design. Only what the code can do matters, not how to display the operation of the code.
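Assuming a constructor-style psyNode, the expanded flag-panel might be declared along the following lines; the field subset shown and the sample values are placeholders, not the journal's final design.

```javascript
function PsyNode(psi, act, pos, dba, num, seq, tkb,
                 tgn, tdt, tia, tcj, tdj, tdv, tpr, aud) {
  this.psi = psi;  this.act = act;  this.pos = pos;   // concept, activation, part of speech
  this.dba = dba;  this.num = num;                    // case of noun or person of verb; number
  this.seq = seq;  this.tkb = tkb;                    // subsequent concept; time-in-knowledge-base
  this.tgn = tgn;  this.tdt = tdt;  this.tia = tia;   // genitive, dative, instrumental-ablative times
  this.tcj = tcj;  this.tdj = tdj;  this.tdv = tdv;   // conjunction, adjective, adverb times
  this.tpr = tpr;                                     // preposition time
  this.aud = aud;                                     // auditory-memory address
}
// Placeholder values only, to show that the wider panel instantiates cleanly:
var node = new PsyNode(540, 0, 5, 4, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 807);
```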

So now basically we have a green light to change the conceptual flag-panels in the Mens Latina code.


Sun.28.JUL.2019 -- Cogito Ergo Sum

Renatius Cartesius is no longer with us, but today we plan to use one of his ideas to prove that artificial intelligence is alive and well in a machine. We have recently expanded the heretofore primitive flag-panel of associative tags in the Mens Latina Strong AI to a state of maturity where the AI Mind has the power to think with a full panoply of five declensional case-endings. For you officers, a panoply is an ancient Greek word formed from "pan" meaning "all" and "hopla" meaning "weapons". Therefore the "pan-hopla" or "panoply" of inflectional endings on Latin nouns and adjectives gives the AI the ability, when in Rome, to think like the Romans do.

All roads lead to Rome, and we are going to use our Cartesian coordinates to get there. In Latin they often repeat the meme that "Non uno die facta est Roma" or "Rome was not built in a day", although I once saw a New Yorker-style cartoon in which a construction manager unfurls the blueprint for Rome and says to his assistant, "Well, I guess we could throw it up in about a day." Likewise the Mens Latina, which is now arguably for one brief moment the most powerful concept-based AI in the world, was not built in a day. In fact, on the twelfth day of Latin mind-making we tried but failed to implement the Cartesian idea of "Cogito ergo sum" or "I think, therefore I am." Our Latin AI Mind was too primitive to link the two ideas, which are equivalent to saying, "I am, because I think." Our re-formulation of the mid-AGI (Artificial General Intelligence) software includes not only points of departure for the five main noun-cases in Latin (or Russian), but also conceptual flag-panel tags for adjectives, adverbs and conjunctions. We will treat the word "ergo" ("therefore") as a conjunction joining the two ideas of "I think" and "I am", or "I exist".

In our newly Cartesian software, when a verb like "I think" is retrieved from memory to express an idea, we will have the VerbPhrase module not only fetch the verb from conceptual memory but also check the conceptual verb-engram for any additional attachments such as an adverb or a conjunction. Now let us go into the JavaScript AI code and tweak the handling of verb-concepts. Oh, first we must modify the storage of the Cartesian idea in the Latin mindboot sequence. We do so, and then we go into the VerbPhrase module where a verb is being selected and we install code that checks for a non-zero, positive value on the tcj time-of-conjunction tag, which we load with any positive value so that the AI Mind will be able to fetch and think the conjunction. We then have to decide at which point in thought-generation the AI will state the conjunction and its conjoined idea. Let us try the Latin-thinking LaThink module. First we insert a test for a positive tcj flag with an alert-box that lets us know that the flag is indeed holding a value. Then we insert code to call the ConJoin module to state the conjunction and the Indicative module to state the conjoined idea. We run the AI. It says "EGO COGITO EGO INTELLIGO TE" -- not what we want. So we go into the ConJoin module and we insert code to check for a positive tcj value and speak the conjunction, but the rather stupid, albeit most advanced NLU AI in the world says "EGO COGITO ERGO EGO INTELLIGO TE" ("I think therefore I understand you"). The output is wrong because it is simply stating the next emerging idea and not the Cartesian punch-line. After much tweaking, we get "EGO COGITO SUM EGO INTELLIGO TE" -- still not satisfactory. So we go away for a while, drink coffee, read the New York Times, and then we start coding again. Finally we get the AI to say "EGO COGITO ERGO EGO SUM".
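A hedged sketch of the conjunction hook: when the selected verb's engram carries a positive tcj tag, the conjunction and the specific conjoined idea parked at that time-point are uttered, instead of whatever idea happens to surface next. The data shapes and names are assumptions for illustration.

```javascript
function thinkWithConjunction(idea, memory) {
  var words = idea.words.slice();            // e.g. ["EGO", "COGITO"]
  if (idea.tcj > 0) {                        // a conjunction is attached to this verb
    var joined = memory[idea.tcj];           // the idea parked at time-point tcj
    words.push(joined.conjunction);          // "ERGO"
    words = words.concat(joined.words);      // ["EGO", "SUM"]
  }
  return words.join(" ");
}
var memory = { 93: { conjunction: "ERGO", words: ["EGO", "SUM"] } };
thinkWithConjunction({ words: ["EGO", "COGITO"], tcj: 93 }, memory);
// "EGO COGITO ERGO EGO SUM"
```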


Paperback book on AI in ancient Latin

The paperback book Artificial Intelligence in Ancient Latin is available from Amazon in various countries:
Australia - Canada - France - Germany - Italy - Japan - Netherlands - Singapore - Spain - United Kingdom - United States.

