Perlmind Programming Journal (PMPJ)

The Perlmind Programming Journal (PMPJ) is both a tool in developing Perlmind open-source artificial intelligence (AI) and an archival record of the history of how the Perlmind AI evolved over time.

Sun.12.APR.2015 -- Mentifex AI moves into Perl.

Since the Mentifex AI Minds are in need of a major algorithmic revision, it makes sense to reconstitute the Mentifex Strong AI in a new programming environment, namely Perl, beyond the original Mentifex encampments first in REXX (1993-1994), then in Forth (1998-present) and finally in JavaScript (2001-present). With Perl, we remain in a scripting language, but a language more modern and more prevalent than Forth. We savor the prospect of ensconcing our Perl mind-modules within the prestigious and Comprehensive Perl Archive Network (CPAN), where we already proposed some AI nomenclature a dozen years ago. With Perl we open up the mind-boggling and Mind-propagating vistas of seeding the noosphere with explosively metastatic and metempsychotic Perl AI that can transfer its files and its autopoiesis instantaneously across and beyond the vastness of the World Wide Web.

Sun.12.APR.2015 -- Downloading the Perl Language

Next we need to do a Google search-and-deploy mission for obtaining a viable version of the Perl language for our Acer Aspire One netbook running Windows XP home edition.

Ooh, sweet! When we search for "download Perl" on Google, we are immediately directed to a download page which presents to us a choice among the Unix/Linux, Mac OS X, and Windows operating systems. Although we wish we were on 64-bit Linux so that we could be listed on a GNU/Linux AI website, we had better choose between ActiveState Perl and Strawberry Perl for our current Windows XP platform. Let's click on the link for Download Strawberry Perl, because it is a 100% Open Source Perl for Windows without the need for binary packages. The site recommends that we use the "latest stable version, currently 5.20.2," and Strawberry Perl (32 bit) is offered to us.

When we first click on the download, a Security Warning asks us whether we want to run or save this 68.6MB file. We click to save the file on our Acer Aspire One netbook. Huh? Almost instantaneously, after we see that the target will be our Acer C-drive, we get a pop-up window that says that we have completed a download not of 68.6 megabytes, but of only 116KB in one second to C:\strawberry-perl- and we may now click on "Run" or "Open Folder" or "Close". Let us click on "Run" to see what happens. Now we get another Security Warning: "The publisher could not be verified. Are you sure you want to run this software?" Its name is "strawberry-per- msi" and we can click on "Run" or "Don't Run". Let's click on "Run". It starts to show a green download transfer, but suddenly it stops and a "Windows Installer" message says, "This installation package could not be opened. Contact the application vendor to verify that this is a valid Windows Installer package."

So we go back to where we had the choice between "Run" and "Save", and this time we click "Run" instead of "Save". In a space of between two and three minutes, the package downloads into a "temporary folder." Then a Security Warning says, "The publisher could not be verified. Are you sure you want to run this software?" Let's click "Run."
Now it says "preparing to install" and "wait for the set-up wizard." Finally it says, "The Setup Wizard will install Strawberry Perl on your computer. Click Next to continue or Cancel to exit Setup." Well, I have a complaint. Why did the process not work when I tried to "Save" the download instead of merely "Running" it for what I was afraid would be one single time? Why is the process of installing Perl so obfuscated and so counter-intuitive? Well anyway, let's click on "Next" and get with the program. Next we have to click the checkbox for "I accept the terms in the License Agreement." Now for a Destination Folder the Strawberry Perl Setup says to "Click Next to install to the default folder or click Change to choose another." C:\Strawberry\ is good enough for Mentifex here. Then we "Click Install to begin the installation." Oops. "Error reading from file C:\Documents and Settings\Arthur\Local Settings\Temporary Internet Files\ Content.IR5\R6BYZW40\strawberry-perl-[1].msi. Verify that the file exists and that you can access it." Now we have ended prematurely because of an error. So we went back again to the initial download process, chose "Run" instead of "Save", and wonder of wonders, we were able to install Perl. We "Click the Finish button to exit the Setup Wizard," and we will read the Release Notes and the README file available from the start menu. Aha! Upon clicking the Windows XP "start" button, we proceed through "All Programs" and "Strawberry Perl" to the Strawberry Perl README in a Notepad file on-screen.

Sun.12.APR.2015 -- Learning to program Perl Strong AI

Now we have to figure out how to run a program in Perl. We consult a Learning Perl tutorial, which says to check that you have Perl installed by entering
perl -v
and so we actually do
C:\Strawberry\perl -v and it works! It says "This is perl 5, version 20, subversion 2 (v5.20.2)" etc. Next, with the MS-DOS make-directory "md" command, we enter md perl_tests to create a "perl_tests" subdirectory.

Then we open the Notepad text editor and we create a file, giving it an AI name rather than the customary beginner's name, because we want to start programming Perl artificial intelligence immediately. From the C:\Strawberry> prompt we try to run the script with a /path/to/perl_tests/ prefix. At first we get "No such file or directory", but when we change directory and enter
C:\Strawberry\perl_tests>perl followed by the script name, we see:
hi -- and so we have run our first Perlmind AI program.
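That very first test program amounts to a single print statement. Here is a minimal sketch of it; the actual filename in this entry has been lost, so any name would do:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $greeting = "hi";    # the entire output of the first Perlmind test program
print "$greeting\n";
```

Running it from the perl_tests directory should print "hi", exactly as described above.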

Sun.12.APR.2015 -- Perl Strong AI Resources


Perlmind Programming Journal (PMPJ) -- details for 2015 May 10:

Our next concern is how we will save auditory engrams into the @aud auditory memory array, with not only the $pho phonemic character being saved but also other elements saved horizontally into a line of data. In a departure from the previous AI Minds in REXX, in Forth and in JavaScript, we wish to tighten up and reduce the number of items being saved beyond each character itself. The JavaScript AI saves auditory engrams in the following format.

krt   pho  act  pov  beg  ctu  audpsi
631.  {
632.  I    0    #    1    0    701
Before we go about eliminating some of the legacy items from the flag-panel, first we have to learn how to save the flag-panel in Perl.

We have gotten the Perl program to display the contents of @aud auditory memory on a line-by-line basis with the following main-loop code.

while ($t < $cns) {  # PERL by Example (2015), p. 190 
  $age = $age + 1;   # Increment $age variable with each loop.
  print "\nMain loop cycle ", $age, "  \n";  # Display loop-count.
  sensorium(); # PERL by Example p. 350: () empty parameter list 
  think();     # PERL by Example p. 350: () empty parameter list 
  if ($age == 999) { die "Perlmind dies when time = $t \n" }; # safety; == for numeric comparison
  if ($t > 30) {
    do { 
      # print @aud;  # show entire contents of the @aud array
      print $aud[$krt], "\n"; # Show each successive character in memory.
      $krt++;     # increment $krt
    } while ($krt < 10);  # show the first ten rows of the @aud array
  };  # outer braces
}  # End of main loop calling Strong AI subroutines 
Next we need to add such flag-panel items as the time-points and the activation-level.

Perlmind Programming Journal (PMPJ) -- details for 2015 May 13:

In the Perlmind as a third-generation ("3G") AI, we need to change the "AudMem" module in combination with the "speech" module so that they both handle a greatly simplified @aud auditory memory array. In the new "3G" @aud array, there should not be seven unwieldy and partly unnecessary flags in the flag-panel or "row" of the auditory array. Instead, there should be only flags which have a legitimate basis in the neuronal nature of the auditory memory. There should be the phoneme itself, its quasi-neuronal activation-level, and its conceptual associative tag.
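The simplified three-flag engram might be sketched as follows, assuming each @aud row is a comma-joined string; the lowercase aud_store() helper is a hypothetical stand-in for the actual AudMem() module:

```perl
use strict;
use warnings;

my @aud;    # auditory memory: one "pho,act,audpsi" row per time-point
my $t = 0;  # time counter

# Store one engram: phoneme, quasi-neuronal activation-level, concept tag.
sub aud_store {
    my ($pho, $act, $audpsi) = @_;
    $aud[$t] = join ',', $pho, $act, $audpsi;
    $t++;
}

aud_store('B', 8, 0);
aud_store('O', 8, 0);
aud_store('Y', 0, 701);   # the word-final character carries the audpsi concept tag
```

Only the word-final engram carries a nonzero audpsi, pointing at the concept in the @psi array.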

Perlmind Programming Journal (PMPJ) -- details for 2015 May 14:

Now we must cause the speech() module to display each word horizontally and with nothing but the phonemic character showing. How do we read out only one element, the $pho, from each row in the @aud auditory array?

Perlmind Programming Journal (PMPJ) -- details for 2015 May 17:

We need to experiment with "slice" in Perl so that we may break down an engram-row in @aud memory and retrieve individual elements from any row in the @aud array.
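A minimal experiment along these lines, using split to break a made-up engram-row back into its elements:

```perl
use strict;
use warnings;

my @aud = ('B,8,0', 'O,8,0', 'Y,0,701');   # made-up three-flag engram rows

# Break one engram-row into its individual elements.
my ($pho, $act, $audpsi) = split /,/, $aud[2];
print "$pho $act $audpsi\n";
```

From here, retrieving only the $pho element from each row for the speech() display is a matter of keeping just the first item of the split.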

Perlmind Programming Journal (PMPJ) -- details for 2015 May 28:

The next module we need to implement is NewConcept, because the AI Mind can not begin to think without making associations among concepts and ideas. In the early AI Steps, all concepts are new concepts.

We need to expand the TabulaRasa routine to include the @psi conceptual array and the @en English lexical array. We do so, and the AI still runs.

In MindForth, NewConcept is called from the AudInput module, which will also call OldConcept when we have coded OldConcept. Now we stub in the NewConcept module, and the AI still runs. Next we add some temporary code to show that new entries are being made in the @psi conceptual array.

Perlmind Programming Journal (PMPJ) -- details for 2015 June 01:

Now that NewConcept has stored data in the @psi conceptual array, next we need EnVocab to be called by the NewConcept module to store data in the @en array for English vocabulary. We will need some temporary, non-permanent code in the AI Perlmind to display data present in the @en array along with the @psi array and the @aud array.

We start using the $nen variable as the "number of the English word." Then we discover that we need to reverse the order of some calls and have AudInput() call NewConcept() first and AudMem() later, so that any new concept is created before its engrams are stored in memory. There is also a major resource which we are not yet ready to use, but which may be of enormous value later in the further development of AI Perlminds.

Perlmind Programming Journal (PMPJ) -- details for 2015 June 04:

Next we need to stub in the EnParser (English Parser) module, not so much for its own sake, but rather as a bridge to calling the InStantiate module. We have EnParser merely announce that it has been called by the NewConcept module, so that we may see that the AI still runs with no apparent problems. We declare the $bias and $pov variables associated with EnParser so as to make sure that they are not reserved words in Perl. We clean up some lines of code previously commented out but left intact for at least one archival release iteration. We caution here that the EnParser module is extremely primitive and relies upon very strict input formats such as subject-verb-object (SVO), so that EnParser can expect a subject-noun, then a verb, and then an object-noun or a predicate nominative. We are more interested in the demonstration of thinking than in the demonstration of parsing. Perl AI coders may be able to adapt pre-existing CPAN or other parsing code for use with the AI Perlmind.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 10:

After many years of development, Perl6 has finally been released around the beginning of this new year 2016. We now position the emerging AI Perlmind as a killer app for the emerging Perl6 programming language. Yesterday we uploaded the Perl6 AI Manual to the Web for use with both P5 AI and P6 AI.

Apparently both Perl5 and Perl6 will have problems in accepting each single keystroke of input from a human user. Therefore we should shift our AI input target away from immediate human keyboard entry and towards the opening and reading of computer files by the AI Mind. Since we envision that a P6AI will sit quietly on a webserver and ingest both local and remote computer files, it makes sense now to channel input into the AI as a file rather than as dynamic keyboard entry.

Today we have created C:\Strawberry\perl_tests\input.txt as a textfile containing simply "boys play games john is a boy" as its only content. Then we have copied the code-sequence of AudInput() as FileInput() and we have made the necessary changes to accept input from an input.txt file instead of from the keyboard.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 11:

Today we need to figure out how to read in each line of input.txt and how to transfer each English word into quasi-auditory memory.

In the FileInput() subroutine of the source, it looks as though the WHILE loop for reading a file may be running through completely before any individual line of input is extracted for AI processing. We move the NewConcept() and AudMem() calls into the WHILE loop so that each line of input is processed separately. However, not just each line, but each word within a line, needs to be processed separately.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 12:

A line of text input needs to be broken up into individual words. First we learn from the PERL Black Book, page 568, that the getc function lets us fetch each single character in a line from our input.txt file. Therefore in the FileInput() module of we use the "#" symbol to comment out the WHILE-loop that was transferring a whole message "$msg" into AudMem(). Then we use getc in a new WHILE-loop to transfer a series of incoming characters from input.txt into AudMem(), where we comment out the string-reversing and chopping code and we convert a do-loop into a simple series of non-looping instructions, because the looping is being done up in the FileInput() module. We see that the program is now transferring individual input characters into auditory memory. Later we will need to make the transfers stop at the end of each input word, shown by a blank space or punctuation or some other indicator. The new code is messy, but we should upload it to the Web and clean it up when we continue programming.
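A self-contained sketch of the getc character-loop described above, with the AudMem() call replaced by a simple push for illustration (the sketch writes its own tiny input.txt so it can run anywhere):

```perl
use strict;
use warnings;

# Write a tiny stand-in for input.txt so the sketch is self-contained.
open(my $out, '>', 'input.txt') or die "cannot create input.txt: $!";
print $out "boys play games";
close $out;

open(my $in, '<', 'input.txt') or die "cannot open input.txt: $!";
my @aud;
while (defined(my $char = getc($in))) {   # fetch one character at a time
    push @aud, $char;                     # stand-in for the AudMem() call
}
close $in;
unlink 'input.txt';                       # clean up the temporary file
print scalar(@aud), " characters stored\n";
```

As the entry notes, this loop does not yet stop at word boundaries; every character, including the blank spaces, flows through.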

Perlmind Programming Journal (PMPJ) -- details for 2016 January 13:

In the FileInput() module of the AI we are inserting the call to NewConcept() so that AudMem() will show an incrementing concept number for each word being stored in auditory memory. Uh-oh, running the AI shows that each stored character is getting its own concept number. Obviously, we will have to call NewConcept() only when an entire new word is being stored, not each individual character.

We were able to test for a blank space (probably not enough) after an input word in FileInput(), then order a "return" out of the WHILE-loop. We had to put "{ return }" in braces to avoid crashing the program. Now the AI loads a first word "boys" over and over into auditory memory, but we have made progress.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 14:

Let us see what happens if we run the Perl AI with no input.txt available for the AI to read. We save input.txt elsewhere and then we delete input.txt from the perl_tests directory. We run the AI program without an input.txt file available, and it goes into an infinite loop. We change the FileInput() code that opens the input.txt file by adding an "or die" clause to halt the program and issue an error message. It works and we no longer get an infinite loop. Then we add the input.txt file back into the directory.
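The "or die" idiom might look like the sketch below; here we wrap it in eval so the sketch can demonstrate the halt without actually terminating, and the filename is hypothetical and assumed not to exist:

```perl
use strict;
use warnings;

my $halted = 0;
eval {
    open(my $in, '<', 'no_such_input.txt')
        or die "Cannot open input file: $!\n";   # halt instead of looping forever
};
$halted = 1 if $@;    # $@ holds the die message after a failed eval
print $halted ? "program would halt cleanly\n" : "file opened\n";
```

In the AI itself there is no eval wrapper; the die simply stops the program with the error message instead of spinning in an infinite loop.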

Now we need to work on getting the AI to store the first word of input and to move on to each succeeding word of input.

When we inspect the MindForth code, we see that the AudInput module first calls OldConcept at the end of a word, and only calls NewConcept if the incoming word is not recognized as an old concept. So we should create an OldConcept() module in the Perl AI program.

In the FileInput() module, we might just wait for a blank space-character and use it to initiate the saving of the word and the calling of both OldConcept and NewConcept(). Even if everything pauses to store the word and either recognize it or create a new concept, the reading of the input file should simply resume and there should be no special need to keep track of the position in the input-line.

In accordance with the MindForth code, any non-space character coming in should go into AudMem(). An ASCII-32 space character does not get stored, but rather a storage-space of one time-point gets skipped, because MindForth AudInput increments time "t" for all non-zero characters coming in. In other words, skipping one time-point in auditory memory makes it look as if a space-character were being stored.
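The space-skipping scheme can be sketched like so: the space character is never stored, but the time counter still advances, leaving a gap in memory:

```perl
use strict;
use warnings;

my @aud;    # auditory memory, one character per time-point
my $t = 0;  # time counter

for my $char (split //, 'BOYS PLAY') {
    if ($char eq ' ') {
        $t++;               # skip one time-point; nothing is stored
    } else {
        $aud[$t] = $char;   # stand-in for AudMem() storage
        $t++;
    }
}
# $aud[4] is left undefined, which looks like a stored space between the words.
```

The undefined row at time-point 4 is the gap that separates "BOYS" from "PLAY" in this sketch.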

It turns out that time "$t" was not yet being incremented in the AI, so we put an autoincrement into the FileInput() module.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 15:

It is time to create the AudBuffer() module to be called by AudInput() or FileInput() and VerbGen(). The primitive coding may be subject to criticism, since the module treats a series of variables as a storage array, but the albeit primitive code not only serves its purpose but is easily understandable by the AI coder or system maintainer. For now we merely insert a stub of the AudBuffer() module.

After wondering where to place the AudBuffer() module, today we re-arrange all the mind-modules to be in the same sequence as MindForth has them, so that it will be easier, when inspecting code, to move among the Forth, JavaScript and Perl AI programs. MindForth compels a certain sequence because a word in a Forth program can call only words defined earlier in the code.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 16:

The program is going to get extremely serious and extremely complicated now, because for the first time in about eighteen years we are going to change the format of the storage of quasi-acoustic engrams in auditory memory. We are going to change the six auditory panel-flags from "pho act pov beg ctu audpsi" down to a group of only three: "pho" for the phoneme or character of auditory input; "act" for the activation-level; and "audpsi" for the concept number in the @psi conceptual memory array.

The point-of-view "pov" variable will no longer be stored in auditory memory, and instead other functions of memory will have to remember, if possible, who generated a sentence or a thought stored in auditory memory. Over the years it has been helpful to inspect the auditory memory array and to see whether a sentence came from the AI itself or from an external source.

The flag-variables "beg" for beginning of a word and "ctu" for continuation of a word served a purpose in the early AI Minds but are now ready for extinction. The Perl language is so powerful that it should simply detect the beginning or ending of a word without relying on superfluous flags stored in the engram itself. Removing obsolete flags makes the code easier to understand and easier to develop further.

We should probably next code the EnVocab() module for storing the fetch-tags of English vocabulary, because the @psi concept array will need to direct pointers into the @en array. In MindForth, EnVocab comes in between InStantiate for "psi" concepts and EnParser for English parts of speech. Oh, we already have a stub of EnVocab(). Then it is time to flesh out the module.

First we create the number-flag $num for grammatical number, which is important for the retrieval of a stored word in English or German or Russian. Then we create the masculine-feminine-neuter flag $mfn for tracking the gender of a word in the @en English array.

We may now be able to discontinue the use of the $fex flag for "fiber-out" and the $fin flag for "fiber-in". These flags were helpful for interpreting pronouns like "I" and "me" as referring to the AI itself or to an external person. The Perlmind should be able to use point-of-view "pov" code to catch pronouns or verb-forms that need routing to the correct concept.

We still need a part-of-speech pos flag to keep track of words in the @en array. We also need the $aud flag as an auditory recall-tag for activating engrams in the @aud array, unless it conflicts with the @aud designation and needs to be replaced with something like $rv for recall-vector.

The $nen flag is already incremented in NewConcept(), and now we begin storing $nen during the operation of EnVocab(). Then we had many problems because in TabulaRasa() we had filled the @en English array with zeroes instead of blank spaces.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 17:

In the program we continue working on EnVocab() for English vocabulary being stored in the @en array. Today we create the variable $audbeg for auditory beginning of an auditory word-engram stored in the @aud array. We also create the variable $audnew to hold onto the value of a recall-vector onset-tag for the start of a word in memory while the rest of the word is still coming in. By setting the $audnew flag only if it is at zero, we keep the flag from changing its truly original value until the whole word has been stored and the $audnew value has been reset to zero for the sake of the next word coming in.

Today for a bug in the AI we kept getting a message something like, "Use of uninitialized value in concatenation (.) or string at line 295" at a point where we were trying to show the contents of a row in the @en English lexical array. In TabulaRasa() we solved the bug by declaring $en[$trc] = "0,0,0,0,0,0,0"; with seven flags set to zero. Apparently TabulaRasa() initializes all the items in the array.
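The fix might be sketched as follows, using a lowercase tabula_rasa() stand-in for the actual TabulaRasa() module and a small array size for the sketch:

```perl
use strict;
use warnings;

my $cns = 32;   # size of the memory arrays (kept small for the sketch)
my @en;         # English lexical array

# TabulaRasa: blank out every row up front, so that later displays
# never hit "Use of uninitialized value" warnings.
sub tabula_rasa {
    for my $trc (0 .. $cns - 1) {
        $en[$trc] = '0,0,0,0,0,0,0';   # seven flags, all zero
    }
}

tabula_rasa();
print "row zero: $en[0]\n";    # concatenation now warns no more
```

Because every row is pre-filled with the seven-zero string, any row may be printed or split before real data has been stored there.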

Perlmind Programming Journal (PMPJ) -- details for 2016 January 18:

In the AI Perlmind, let us see what happens at each stage of reading an input.txt file.

The MainLoop calls sensorium() which in turn calls the FileInput() module. FileInput() goes into a WHILE-loop of reading with getc (get character) for as long as the resulting $char remains defined. As each character comes in, FileInput() calls AudMem() to store the character in auditory memory. Each time that $char becomes an empty non-letter at the end of an input word, FileInput() increments the $onset flag from $audnew and calls NewConcept(), because the AI must learn each new word as a new concept.

NewConcept() increments the number-of-English $nen lexical identifier and calls the English vocabulary EnVocab() module to set up a row of data in the @en array. NewConcept() calls the stub of the English parser EnParser() module. FileInput() calls the stub of the OldConcept() module.

The MainLoop module calls the Think() module which calls Speech() to output a word as if it were a thought, but the AI has not yet quickened and so the AI is not yet truly thinking. At the end of the program, the MainLoop displays the contents of the experiential memory for the sake of troubleshooting the AI.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 19:

The program is ready for us to add the InStantiate() module for creating concepts in the @psi array of the artificial Mind. Let us change the @psi array into the @psy array so that a $psi variable will not conflict with the name of the conceptual array.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 20:

We may need to remove the activation-flag from the flag-panel of the @en English lexical array. In the previous Forth and JavaScript AI Minds, we had "act" there in case we needed it. Now it seems that in MindForth only the KbSearch module uses "act" in the English array, and the module could probably use @psy for searches instead of the @en lexicon.

There is some question whether part-of-speech $pos should be in the @psy conceptual array or in the @en lexical array. A search for "6 en{" in the MindForth code of 24 July 2014 reveals that no use seems to be made of part-of-speech "pos" in MindForth. Apparently part-of-speech has already been dealt with during the functions that use the Psi array, and therefore the English array does not concern itself with part-of-speech. So part-of-speech could be dropped from the @en English array.

It looks as though part-of-speech has to be assigned in the @psy array before inflections are fetched in a lexical array. If a person says, "I house you in a tent," then a word that is normally a noun becomes a verb, "to house." The software should override any knowledge of "house" as being a noun and store the specific, one-time usage of "house" as a verb. Then the AI robot can respond with "house" as a verb to suit the occasion: "Please house me in a shed." OldConcept() should not automatically insist that a known word always has a particular part-of-speech. In a German AI, VerbGen() should be called to create verb-endings as needed, if not already stored in auditory memory.

In the @psy concept array we should have seven flags: psi, act, pos, jux, pre, tkb, and seq. If we now change the tqv variable from MindForth to $tkb in the Perl AI, it clearly becomes "time-in-knowledge-base" for Perl coders and AI maintainers.

It suddenly dawns on us that we no longer need an enx flag in the @psy array. We may still need the $enx variable for passing a fetch-value, but it looks like the @psy concept number and the @en lexical number will always be the same, since we coded MindForth to find inflections for an unchanging concept number.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 21:

Now we are invited to make a drastic simplification by merging the @psy array and the @en array, because any distinction between the two arrays has gradually become redundant. The @psy array has psi, act, pos, jux, pre, tkb, seq flags. The @en array has nen, num, mfn, dba, rv flags. We could join them together into one @psy conceptual array with psi, act, pos, jux, pre, tkb, seq, num, mfn, dba, rv flags.

The first thing we do is in TabulaRasa(), where we fill each row of the @psy array with eleven zeroes for the eleven flags. Next we have the InStantiate() module store all eleven flags in the combined flag-panel. We run the Perl AI and it makes no objections. Then we have InStantiate() announce the values of all eleven flags before storing them.
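The eleven-flag storage might be sketched like this, with a lowercase in_stantiate() stand-in for the actual InStantiate() module; any flag not passed in defaults to zero:

```perl
use strict;
use warnings;

my @psy;    # merged conceptual array: eleven flags per row
my $t = 0;  # time counter

# Store one concept with the combined eleven-flag panel:
# psi act pos jux pre tkb seq num mfn dba rv
sub in_stantiate {
    my %f = @_;
    $psy[$t] = join ',', map { $f{$_} // 0 }
        qw(psi act pos jux pre tkb seq num mfn dba rv);
    $t++;
}

in_stantiate(psi => 701, act => 8, pos => 5, rv => 12);
print "stored: $psy[0]\n";
```

Splitting any row back on commas yields the eleven flags in their fixed order, with the recall-vector rv in the final position.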

In the flag-panel of the @psy array, we should probably add a human-language-code "hlc" so that an AI can detect English or German or Russian and think in the indicated language.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 22:

Now that we have merged the @en array into the @psy conceptual array, we gradually need to eliminate the $nen variable. However, we need a replacement other than the $psi variable so that the replacement variable can hold steady and wait for each new word being learned in English, German, Russian or whatever human language is involved. Let us try using $nxt as the next-word-to-be-learned.

2016 January 23:

We are now trying to code the AudRecog() module taken from MindForth, although the timing may be premature.

As we began coding AudRecog() in the AI, we discovered that the primitive EnBoot() sequence did not contain enough English words to serve as comparands with a word being processed in the AudRecog() module, so we must suspend the AudRecog() coding and fill up the EnBoot sequence properly before we resume coding AudRecog().

Today we rename the English bootstrap EnBoot() sequence as MindBoot() because the Perl AI with Unicode will not be limited to thinking only in English, but will eventually be able to think also in German and in Russian.

2016 January 24:

We are replacing the Think() module with EnThink() for English thinking, and we are declaring DeThink() as a future German thinking module and RuThink() as a future Russian thinking module.

Coding the AudRecog() module in Perl5, we move left-to-right through the nested if-clauses. At the surface we test for a matching $pho. Nested down one layer, we test for zero activation on the matching $pho, because we do not want a match-in-progress. At the second depth of nesting, we test for the onset-character of a word. In the previous AI Minds coded in Forth and JavaScript, we still had the "beg(inning)" flag to fasten upon a beginning character at the start of a comparand word in auditory memory. Now in the Perl AI we must rely on the $audnew variable, which is set during FileInput() but which we have apparently neglected to reset to zero. Let us try setting $audnew back to zero just before we close the input.txt file. Oh no, $audnew won't work here, because $audnew applies only to the beginning of an input word, not to the beginning of a word stored in memory. Maybe we can instead test not only for a zero-activation matching $pho but also for an adjacent blank space.

Now, we are going backwards in memory from space-time $spt down to $midway, which is set to zero in the primitive AI. The $i variable is being decremented at each step backwards. We would like to know if going one step further encounters the space before a word. We might have to start searching forwards through memory if we want to trap the occurrence of an initial character in a stored word. If we go forwards through memory, we could have a $penult variable that would always hold the value of each preceding moment in time. For the chain of activations resulting in recognition, it should not matter if the sweep goes backwards or forwards.

2016 January 25:

Now we will stop searching backwards in AudRecog() and search forwards so that it will be easier to find the beginning of a comparand word stored in auditory memory.

As we debug the AI, we notice that the MindBoot sequence is not a subroutine, as EnBoot was in the previous AI Minds. We should call MindBoot() as a one-time subroutine from the MainLoop. We establish TabulaRasa() and MindBoot() as subroutines and we give them a one-time call from the MainLoop.

Throughout many tests we were puzzled because AudRecog() was not recognizing an initial "b" at zero activation preceded by a zero $penult string. Finally it dawned on us that the MindBoot() "BOY" was in uppercase, so for a test we switched to lowercase "boy", and suddenly the proper recognition of the initial character "b" was made. But we will need to make input characters go into uppercase, so that AudRecog() will not have to make distinctions.

2016 January 26:

Moving on, we need to consult our Perl reference books for how to shift input words into UPPERCASE. The index of Perl by Example has no entry for "uppercase". None for "lowercase" either. However, the index of the Perl Black Book says, "Uppercase, 341-342." BINGO! Mr. Steve Holzner explains the "uc" function quite well on page 341. Let us turn the page and see if we need any more info. Gee, page 342 says that you can use "ucfirst" to capitalize only the first character in a string -- one more example of how powerful Perl is. Resident webserver superintelligence, here we come.

Now let us try to use the "uc" function in the free Perl AI source code as we continue. We had better look into the FileInput() module first. Hmm, let us go back to the index of Perl by Example, where we find "uc function, 702." Okay, let us try using "uc $char" at the start of the input WHILE-loop in the FileInput() module. Huh? It did not work. Uh-oh. Houston, we have a problem. Our mission-critical Perl Supermind is stuck in lowercase. Here we have been trying to learn Perl, but we have never coded any Perl program other than artificial intelligence. Even our very first "Hello world" Perl program was an AI program, and we never did any scratch-pad Perl coding. Meanwhile there are legions of Perl coders waiting for us to finish the port of AI Minds first into Perl5 and then into Perl6. Let us check the Perl Black Book again. Let us try $char = "uc . $char"; in the FileInput() module. We drag-and-drop the line of code from this journal entry straight into the AI code. Then we issue the MS-DOS "perl" command and take a look. Oh no, the "uc" itself is going into memory as if it were the input. Hey, it finally worked when we used $char = uc $char; as the line of code. Now the contents of auditory memory are being displayed in uppercase. We can go back to coding the AudRecog() module.
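For the record, the uc and ucfirst functions behave as the Perl Black Book describes:

```perl
use strict;
use warnings;

my $char = 'b';
$char = uc $char;            # the line that finally worked: uppercase one character
my $word = ucfirst 'boy';    # capitalize only the first character of a string
print "$char $word\n";
```

The failed attempt $char = "uc . $char"; put the literal text "uc" into the string, because everything inside the double quotes is data, not a function call.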

2016 January 27:

Although we have done away with the ctu-flag of MindForth in the Perl AI, because we want to reduce the number of flags stored in the @aud auditory memory, in AudRecog() we may recreate a non-engram "ctu" equivalent by using the split function to look ahead one array-row and see whether a stored comparand word continues beyond any given character.
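The look-ahead idea can be sketched as follows; the comma-separated "character,activation,audpsi" row format is an assumption for illustration, not necessarily the program's actual flag panel:

```perl
use strict;
use warnings;

# Hypothetical @ear rows as "character,activation,audpsi" flag panels.
my @ear = ("B,8,0", "O,7,0", "Y,6,589", " ,0,0");

# Does the stored comparand word continue beyond row $i?
sub continues {
    my ($i) = @_;
    my @nxr = split /,/, $ear[ $i + 1 ];    # look ahead one array-row
    return ( $nxr[0] =~ /[A-Z]/ ) ? 1 : 0;  # a letter means the word goes on
}

print continues(0), "\n";   # 1 -- "B" is followed by "O"
print continues(2), "\n";   # 0 -- "Y" is followed by a blank: word ends
```

The same ($nxr[0] !~ /[A-Z]/) style of test reappears later in the journal for deciding when to store the $audpsi tag.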

2016 January 28:

We would like to have the FileInput() module call the human-computer-interaction AudInput() module if the input.txt file is not found. In that way, we can simply remove input.txt to have a coding session of direct human interaction with the AI Perlmind.
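A minimal sketch of that fallback, with stand-in versions of the journal's module names (the real modules do far more than return a mode string):

```perl
use strict;
use warnings;

sub AudInput { return "keyboard" }    # stand-in for live human interaction

sub FileInput {
    my ($fname) = @_;
    if ( open my $fh, '<', $fname ) { # input.txt found: read it
        my @lines = <$fh>;
        close $fh;
        return "file";
    }
    return AudInput();                # input.txt missing: switch to keyboard
}

print FileInput("no_such_input.txt"), "\n";   # "keyboard" when the file is absent
```

Because open() returns false on failure, no separate existence test is needed before falling back.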

2016 January 29:

We are continuing to improve AudInput() towards the same functionality that we developed in FileInput().

The line of input goes into a $msg string, which AudInput() needs to process in the same way as FileInput() processes the input.txt file, except that AudInput() only has to deal with one line at a time, which is presumably one sentence or one thought at a time.

2016 January 30:

Today we hope to fix a problem that we noticed yesterday after we uploaded the latest code to the Web. We had carefully gone about sending the input $pho (phoneme) into AudMem() and AudRecog(), but no word was being recognized in AudRecog() -- which we had coded for six hours straight three days ago. Then yesterday we saw that we had left a "Temporary test for audrec" in the AudMem() module and that the code was arbitrarily changing the $audpsi from any recognized $audrec concept to the $nxt (next) concept about to be named in the NewConcept() module. Now we will comment out that pesky test code and see if AudRecog() can recognize a word. Hmm, commenting out the code did not seem to work.

We hate to debug the pristine Perl AudRecog() by inserting diagnostic message triggers into it, but we start doing so, and pretty soon we discover that we neglected to begin AudRecog() with the activation-carrier $act set to eight (8), as it is in the predecessor Mindforth AI. So let us set $act to eight in the AudRecog() Perl code and see what happens. Uh-oh, it still does not work.

But gradually we got AudRecog() to work. Now in the AI we are working on the AudMem() module. We want it to store each $pho phoneme in the @ear array as $audpsi if there has been an auditory recognition, and as simply $nxt if only the next word from NewConcept() is being stored.

The $audpsi shall be stored if the next time-point is caught by the ($nxr[0] !~ /[A-Z]/) test as not being a character of the alphabet.

2016 January 31:

Yesterday we got the Perl AI to either recognize a known word and store it in the @ear array with the correct $audpsi tag, or instead to store a word as a new concept with the $nxt identifier tag. However, in the @psy conceptual array, the Perlmind is improperly incrementing the $nxt tag because we have not yet figured out how to declare that a character flowing by is the last character in a word. Bulbflash: Maybe we can store the $nxt tag at the end of each @ear row, erasing it when each successive character comes in, so that only the last letter of the word will end up having the $nxt tag.
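The bulbflash can be sketched as follows; the two-field row format and the $nxt value of 900 are assumptions for illustration:

```perl
use strict;
use warnings;

my $nxt = 900;    # next-new-concept number (value assumed for illustration)
my @ear;          # rows of "character,concept-tag"
my $t = 0;
for my $char ( split //, "SEES" ) {
    $ear[ $t - 1 ] =~ s/,\d+$/,0/ if $t > 0;   # erase the tag on the prior row
    $ear[$t] = "$char,$nxt";                   # provisionally tag this row
    $t++;
}
print join( "  ", @ear ), "\n";   # S,0  E,0  E,0  S,900
```

Each incoming character wipes the tag from its predecessor, so when the word ends, only its final letter still carries the $nxt concept-tag.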

2016 February 01:

Yesterday we uploaded the January 2016 PMPJ material to the Cyborg weblog as a blog-post. Today we are continuing to port the Strong AI Minds from Forth and MSIE JavaScript first into Perl5 and then hopefully into the new Perl6.

When we type a multiword sentence into the primitive Perl AI, we notice that only the first word is currently being stored in the @ear auditory memory array. We type in "boys play games" but we see only the word "BOYS" in the memory array. Obviously something needs to be improved in the AudInput() module for auditory input, where we see a do-while loop that stops when it encounters a blank space in the user input. We need to process the entire sentence of user input.

2016 February 02:

Yesterday we made it possible for an outer loop in the AudInput() module to read in an entire sentence of user input while the inner loop deals with individual words in the sentence by sending each character into the AudMem() module. AudMem() sends each word character by character into the AudRecog() module for auditory recognition of known concepts or for creation of a new concept if an input word is not recognized by the AI.

Now we need to straighten out the proper assignment of concept-tags in audition for both known and previously unknown concepts (words). When we run the AI, we see that it takes in one whole sentence but only the first word of each additional sentence. We suspect that the problem is that we have not zeroed out the $eot variable after the outer loop of AudInput(). We set $eot back to zero and see what happens. The AI starts taking in one whole sentence after another, with the result that we run out of auditory memory to hold all the sentences. We also noticed that the AI was not properly incrementing the $nxt next-new-word variable for each unknown word in a sentence.
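The outer/inner loop structure described above can be sketched like this; AudMem() is reduced to a stand-in that merely collects characters:

```perl
use strict;
use warnings;

my @stored;
sub AudMem { push @stored, $_[0] }     # stand-in: just collect the characters

my $msg = "BOYS PLAY GAMES";
for my $word ( split /\s+/, $msg ) {   # outer loop: word by word
    AudMem($_) for split //, $word;    # inner loop: character by character
    AudMem(' ');                       # a blank marks the end of each word
}
print scalar(@stored), "\n";   # 16 -- thirteen letters plus three blanks
```

Resetting loop-control flags such as $eot after the outer loop finishes is what lets the next sentence be processed in full.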

First we need to look and see where the $nxt counter supposedly gets incremented. Hmm, the free AI source code uses $nxt++ only in the NewConcept() module. Why does it not happen for the next new concept after each previous new concept? We put a diagnostic message into the NewConcept() module, run the Perl killer app program, and the message does not even appear, so maybe NewConcept() is not being called.

AudInput() is supposed to call the NewConcept() module. Oh, but the code to call NewConcept() has been left out of the inner loop for single words, such as previously unknown words (new concepts), and so the code is not doing anything immediately after the outer AudInput loop. Let us put the NewConcept-calling code back into the inner loop. We do so, and now $nxt is being incremented too rapidly. Oh, we put the NewConcept-calling code too deep inside the inner loop. The calling code must be inside the outer loop, but also outside of the inner loop. Let us make the change and see what happens. Well, the AI is no longer incrementing $nxt too rapidly, but there is still a problem. The $nxt variable is being incremented only once for each new word, but the resulting concept-number is not being properly assigned at the end of each new word in auditory memory.

In the AudRecog() module, we should try to get away from using the $penult flag to detect an engram preceded by a blank space. Let us try using $prv to split up the @ear row and look at the preceding engram.

2016 February 03:

We have been working fruitlessly on the AudRecog() module and now we suspect that no activation is being imposed inside the flag-panel of the @ear auditory memory array. Therefore we shall take our most recently uploaded version and save it under a new name for further work.

After some experimentation we are learning how to get AudRecog() to impose activation on an engram in the @ear auditory array. We do not want these activations to persist over time, so now we create the AudDamp() module derived from the Mindforth AI.

2016 February 04:

We continue to troubleshoot the AudRecog() module. Today we are trying to make sure that the module creates chains of non-initial activation for the achievement of conceptual recognition at the end of any unbroken chain. When we type in "boy sees all", we notice among the diagnostic messages that the AI is claiming to increase non-initial activation for the phoneme "O" at both t=4 amid "ERROR" and at t=14 amid "BOY". The "O" in "ERROR" should not be receiving any activation. When we investigate, we see that the diagnostic message is a misnomer, and we change it to a question asking if it is time to "INCREASE NON-INITIAL ACTIVATION?"

Then we have a problem because AudRecog() seems to recognize the input word "BOY" quite well at the t=19 time-spot where "BOY" comes to an end in auditory memory, but the AudMem() module continues to the t=20 time-spot and diagnostically claims to be storing concept number "589" (for "BOY") a second time. Oh, Mindforth stores the "audpsi" and then immediately zeroes out the "audpsi". Let us try the same thing in the Perl AI AudMem() module. It does not work; it somehow prevents the basic recognition of each word.

2016 February 05:

Today we are concerned with setting the $audpsi tag after a word of input has come in. We have renamed the program for several reasons. Although it is indeed an AI Mind, we want there to be many Perlmind instantiations, of which this program is just an early one, struggling with all the other Perlminds for survival of the fittest. We also want the user interface to be a dialog between "Human" and "Ghost", not between the "Human" and "Robot" of the MindForth AI. A Perl AI running on a webserver is likely to resemble "the ghost in the machine" more than a robot.

2016 February 06 -- A Simple Bugfix

In the AI program today we attempt a minor bugfix -- a major event because up until now the primitive artificial intelligence in Perl was not robust enough to have any minor problems. The purported Perl killer app has only four words -- ERROR, A, ALL, BOY -- in its MindBoot sequence, because we want to see a non-scrolling, easily viewable screen of data when we run the AI to its Perl-die end for the sake of troubleshooting. Recently we improved the AudMem() module so that it stores engrams well in auditory memory, but we notice a bug in the process of auditory recall. When we type in, "A boy sees all," the Ghost program assigns the following concept-numbers: A [101] BOY [589] SEES [101] ALL [123]. It is incorrect for the AI to assign "101" to the unknown word "SEES". Let us try to understand and fix whatever went wrong.

As we inspect the diagnostic messages during the input, we observe that a false value of "101" for both the "provisional recognition" $prc and $monopsi is being carried over, and $audrec for "auditory recognition" is holding the same "101" value, which belongs to "A" and not to the unknown input-word "SEES", which should be receiving a $nxt new-concept value of "900". These erroneous values were probably assigned during the earlier processing of the article "A" at the start of the input sentence. Certain flags were not being zeroed out when they should have been.

In the auditory recognition AudRecog() module, we see that $monopsi gets its value from $prc, so we should look into the provisional recognition $prc as potentially the source of the problem. In AudRecog(), $prc gets its value from the "$aud[2]" concept-number of any single-character word coming in, such as "A" in this case or "I" in other situations. The current AudRecog() code does not seem to zero out the $prc variable anywhere. Let us see if the ancestral MindForth code zeroes out prc anywhere. Hmm, not even MindForth zeroes out prc. Let us see if the JavaScript AiMind.html zeroes out prc. Well, the JSAI zeroes out prc in both AudMem() and AudInput(), so the Ghost AI in Perl should get with the program. But we should not zero out $prc while program-flow is still in AudRecog(). No, the $prc variable is apparently meant to carry its value over into AudMem(), where it can at least provide a word-stem, especially for a highly inflected language like Russian. Since we are not yet using $prc in the Perl AudMem() module, let us zero out $prc in the AudInput() module. Now let us run the Ghost AI again and see what spooky things occur. We enter "A boy sees all" and we get the following correct concept numbers: A [101] BOY [589] SEES [900] ALL [123]. The unknown word "SEES" obtains the concept number "900" from the $nxt variable which is set to "900" at the end of the MindBoot() sequence and which will be incremented for each new word being treated as a new concept by the Ghost AI -- the ghost in the machine.

Unfortunately, there is still a bug. We type in a sentence and we get the following concept numbers:
I [900] SEE [901] A [902] BOY [589]. The "A [902]" assignment is not correct. Who're you gonna call? Let us call in some Perl AI ghostbusters.

Hmm, let us run the previous version with the same input. We do, and we encounter the same glitch. After long troubleshooting, we discover that the $audrun variable needed to be reset to zero after each inner loop in the AudInput() module. Doing so fixed the problem of only getting one recognition of the "A" monopsi, but there is still a problem with $monopsi being set prematurely in AudRecog(), probably for lack of testing to see whether the next space is a non-character.

2016 February 07:

Today we may go back to searching backwards through auditory @ear memory in the auditory recognition AudRecog() module, because for lack of massive parallelism (MasPar) we will want to accept the first-found and therefore most recent recognition of a word. We have looked back over the Perlmind Programming Journal (PMPJ) to find that we were making the change to forward searches around 24 January 2016. From the older version we took the code "for (my $i=$spt; $i>$midway; $i--) {" and we inserted it into the AudRecog() module, where we commented out the line of code for forward searches. Then we ran the Ghost AI program and it worked perfectly well, even with the search changing from forwards to backwards. Apparently, our use of the "split" function to look at previous and next rows of memory in the @ear array made it not matter whether we were searching forwards or backwards. However, now that we are searching backwards again, we may have to use the Perl word "last" to exit from the loop of the search when we have found the most recent recognition of an old concept.

The AI does not really find the most recent recognition of a known word until the search centers upon the very last character of the input word. If we have an eight-letter word coming in, the entire @ear array has to be searched for the first seven characters. We would only want to leave (as in Forth) or break (as in JavaScript) from the search-loop when the most recent result is all we need. Furthermore, as the AI ghosts get more sophisticated, we may want the found results to be as close in memory as possible to a conversation going on in real time. We might not merely be recognizing the word; we might also be reactivating the ideas using the word at the time of its retention in memory.
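A minimal sketch of the backward search with "last"; the three-field row format and the toy engram data are assumptions for illustration:

```perl
use strict;
use warnings;

# Two engrams of "BOY" (concept 589) in "character,activation,audpsi" rows.
my @ear = ( "B,0,0", "O,0,0", "Y,0,589", " ,0,0",
            "B,0,0", "O,0,0", "Y,0,589", " ,0,0" );
my $spt    = $#ear;   # search starts at the present moment
my $midway = -1;      # and may reach all the way back
my $audpsi = 0;

for ( my $i = $spt; $i > $midway; $i-- ) {   # the journal's backward loop
    my @aud = split /,/, $ear[$i];
    if ( $aud[2] ) {                         # a concept tag is found
        $audpsi = $aud[2];
        print "found $audpsi at t=$i\n";     # t=6: the most recent engram
        last;                                # stop at the first (newest) hit
    }
}
```

Because the loop runs from the present backwards, the first tag encountered is automatically the most recent one, and "last" prevents older engrams from overwriting it.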

Now we will try to expand the MindBoot sequence.

2016 February 09:

Four days ago we finished debugging the AudRecog() module, and yesterday we completed the transfer of the English bootstrap from the JavaScript AiMind.html into the MindBoot() sequence of the Webserver Ghost AI. Now we notice that a new word does not show a recall-vector "$rv" tag in the Tutorial display, so we should troubleshoot the problem.

2016 February 10:

Today we want to move beyond the completion of the MindBoot() sequence on a par with the JavaScript AiMind.html, and we want to code the modules that comprehend human language input. When we insert a diagnostic message into the EnParser() module and run the Ghost AI, the message does not make any appearance. In the free AI source code, we see that EnParser() is called by NewConcept(), but it also needs to be called by OldConcept().

Because we have merged the lexical array of MindForth into the conceptual @psy array of the Perl AI, we may have to bypass at first and then remove the EnVocab() module from the AI Perlmind. The Perl AI will have to use the InStantiate() module to store the linguistic flags for English words.

The OldConcept() module now searches backwards in time to find the most recent engram of an $oldpsi concept. Then we use the Perl last function to exit from the search-loop in order to report the found data for the InStantiate() module.

2016 February 11:

We would typically start coding by asking what the most crying need is, but nothing shouts out right now for debugging or coding. We perhaps ought to take care of the remaining associative tags in the @psy conceptual array, but we would like to code something more daring. Let us try to code some parts of the thinking machinery.

2016 February 12 -- Rudimentary NounPhrase() Code

As part of the thinking machinery, we will now try to fill in the NounPhrase() stub with some functioning code. In advance we know that we will have to add code to InStantiate() in order to impart functionality to the thought-generation modules.

We begin fleshing out NounPhrase() by having the module search for the most active concept in the @psy concept array. No such active concept is found, because first we have to establish code that lends initial activation to any concept talked about by a human user. Accordingly we assign a value to the $act variable in the EnParser() module and that same value gets stored with concepts in the InStantiate() module. In the NounPhrase() module we temporarily search for any concept with an activation higher than zero, and the module begins to report the finding of a $motjuste concept.

2016 February 14:

Today we are experimenting with UTF-8, because we want our Perl AI to think in Russian, along with English and German. We also want to make it easier for other Perl programmers to create AI Perlminds in other foreign languages beyond Russian, but using the Unicode in Perl that we use for Cyrillic characters in Russian.

With an otherwise abandoned version of the Perlmind, we rename the FileInput() module as RuFileInput() and we create an input.txt file with Russian characters in it. At first the AI goes into a seemingly infinite loop, and we have to press Control-C to stop it, which shuts down the MS-DOS window. We start the AI back up, but we insert some "die" breakpoints one after another in the new RuFileInput() module, so we can see what the AI does with Cyrillic input. It calls AudMem() and tries to store a character that is definitely not Cyrillic. We had better consult our various Perl books and webpage print-outs to learn how to "encode" the incoming Cyrillic characters from the input.txt file.

The program complained about an "undefined" subroutine &Main::encode until we put "use Encode" in the heading.

We started using open (my $fh, "<:utf8", "input.txt") on the Cyrillic input.txt file, and now AudMem() was trying to store one recognizable character followed by something unrecognizable.
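A self-contained check of the "<:utf8" open mode: write a short Cyrillic string through the layer and read it back, confirming that it comes in as characters rather than raw bytes. The file name is for illustration only.

```perl
use strict;
use warnings;
use Encode;   # "use Encode" in the heading, as noted above

my $file = "ru_utf8_test.txt";   # file name is for illustration only
open my $out, ">:utf8", $file or die "cannot write: $!";
print {$out} "\x{0414}\x{0410}\n";   # Russian "DA"
close $out;

open my $in, "<:utf8", $file or die "cannot read: $!";
my $line = <$in>;
close $in;
chomp $line;
unlink $file;

print length($line), "\n";   # 2 -- two characters, not four raw bytes
```

Without the :utf8 layer, the same read would yield four bytes, which is one way garbage characters end up in auditory memory.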

2016 February 15:

Yesterday we had a version of the Perlmind open a Cyrillic input.txt file and store each character through AudMem() in the @ear auditory memory, but the nonsense Russian characters being stored were not the same characters as we had Alt-Shift typed into our Russian input.txt file. Today we plan to try a few more diagnostic efforts, such as changing the Cyrillic input.txt to contain only one single Russian letter, typed twenty or thirty times. As we cycle through a few letters of the Russian alphabet, we may get a clue as to what the RuFileInput() module is doing with the supposedly Cyrillic input.

We also plan to store the Cyrillic input.txt file somewhere else and delete it from the Perl directory, so that the RuFileInput() module will first try to open the text file and then call RuAudInput() instead, so that we can work on entering Cyrillic directly through the keyboard. We know that in early versions of the Perlmind we were able to type in Russian and see it stored as Russian in the auditory memory. Our real aim is to get the Russian portion of the MindBoot() sequence working properly so that the AI can recognize Russian words and think in Russian.

When we make the Cyrillic input.txt file consist of only one particular Russian letter used several times, our AI stores only one character but the wrong one, and in concept-groupings consistent with the length of the quasi-words in the input file. Somehow the RuFileInput() module is not reading in the Cyrillic characters as we intended them to be read in.

We make some progress when we delete the Cyrillic input.txt file so as to make RuFileInput() give up and switch to RuAudInput(). There at first we use $msg = decode_utf8() and we do not get the correct Russian letters during AudMem() processing. When we switch to $msg = encode_utf8() suddenly the Russian characters show up loud and clear, although they are apparently not becoming $pho and they are not being stored in the @ear auditory memory.

By a fluke we have discovered that one experimental program shows us a code for any Russian letter that we enter, and when we store the indicated code in the MindBoot() of our more advanced program, it renders the Russian letters without ancillary symbols in the "Ghost: " output.

2016 February 16 -- Using Perl Unicode to Display Russian

Yesterday we finally made the Perl AI program display Russian Cyrillic characters without a symbol to the left of them, but we do not yet understand how the Unicode UTF-8 works to display Russian characters. Today we hope to isolate and grok the code that works.

From the experimental version we have removed snippets of code and then tested to make sure that the AI still shows Russian letters without extraneous symbols. Eventually we get down to the EnThink() module, which we notice is calling the Speech() module twice in order to simulate thinking. During the first call, Speech() says "ERROR" in English, because no time point has been given for the start of speech and the word "ERROR" is found at the default beginning point. During subsequent calls from EnThink() to Speech(), the time-point for some Russian test vocabulary is given, and so we see some Russian words. But the Perl program seems to need at least one encoding in the format of \N{U+0} to prevent the extra symbols from being displayed with the Cyrillic characters, so we use one such encoding in advance of our intended first Russian word.

2016 February 17 -- Apportioning Time for English, Russian and German in MindBoot

The Perl AI currently uses 651 time-points for the English boot sequence. The Russian Dushka AI uses 577 time-points. The German Wotan AI uses 1215 time-points. Perhaps in Perl we should encode the Russian bootstrap starting at t=1001 so as to leave space for expansion of the English bootstrap and so that the German bootstrap may start around t=2001 and go beyond t=3000.

2016 February 18:

Today we need to see first if AudMem() in the Perl AI properly stores Russian Cyrillic characters, and secondly if the as-is AudRecog() module can recognize Russian words, or if AudRecog() needs modification to handle Russian Cyrillic.

Oh gee, currently AudMem() does a lot of filtering to make sure that an auditory engram is in the [A-Z] range.

2016 February 19:

When we comment out # binmode STDIN, ":encoding(UTF-8)"; near the start of the AI Perlmind, we stop getting the message "utf8 "\x9F" does not map to Unicode at line 389" upon Alt-Shift keyboard input of the capital letter "A" in Russian. Instead we get the Russian letter with a subscript "T" symbol to the left of the Cyrillic "A".

We have already been using "reverse" on the input in the AudInput() module, in order to "chop" off each character of input from the first to the last for transfer to the AudMem() module. Now that Russian characters are coming in with a four-character coding like "\x9F" for capital Russian "YA", we try using the substr function to glom onto the four characters designating the one Russian character. However, our diagnostic messages inserted into the AudInput() module reveal that even the hex code has undergone the "reverse" function and "\x9F" has turned into "F9x\" as a value of the $pho(neme) variable. So we may have to "reverse" the $pho variable.

2016 February 20:

In our Russian AI progress, we have gotten Russian words stored in the Perl MindBoot() to appear on screen as Cyrillic characters, and we have gotten Russian from the Windows XP keyboard to go through AudMem() into auditory memory where it gets displayed in Cyrillic characters during a diagnostic read-out. Next we need to see if the Perl AudRecog() can recognize Russian words just like the Dushka JavaScript AudRecog().

We may try now to see if the AudInput() module can immediately detect Russian input and set the $hlc human language code to "ru" so that both AudInput() and AudMem() may treat English and Russian input differently.

Our Mentifex LinkedIn profile has been updated today to convey central facts about the Perl "Ghost" AI project.

Today we created thirty-three tests for Russian capital letters to set the human-language-code $hlc to "ru" for Russian. Then we got a brighter idea and we coded thirty-three new tests to not only convert lowercase to uppercase but also to set the human-language-code $hlc to "ru" for Russian.
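Instead of thirty-three separate per-letter tests, a single character-class match can both detect Cyrillic (setting $hlc to "ru") and trigger uppercasing -- a hedged sketch, since the actual Ghost code tests each letter individually:

```perl
use strict;
use warnings;

my $hlc = "en";             # human-language-code starts as English
my $pho = "\x{044F}";       # lowercase Russian "ya"
if ( $pho =~ /[\x{0410}-\x{044F}\x{0401}\x{0451}]/ ) {   # basic Cyrillic range
    $hlc = "ru";            # Russian input detected
    $pho = uc $pho;         # Perl's uc folds Cyrillic case as well
}
print "$hlc ", sprintf( "U+%04X", ord $pho ), "\n";   # ru U+042F
```

Perl's uc performs Unicode case-folding on characters above the Latin range, so the same call that uppercases "boy" also uppercases Cyrillic letters.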

Since we see diagnostically that our AudMem() module is now sending Russian characters into AudRecog() but not getting any $audpsi recognitions, we may need to create a RuAudRecog() version of AudRecog() and use it to recognize Russian words when the human-language-code $hlc is set to "ru" for Russian.

2016 February 21:

Today perl objects to our use of if ($prv[0] =~ /["\x**"]/) in the RuAudRecog() module, stating that the illegal hexadecimal digit "*" is being ignored, when we try to use "*" as a wildcard. At first we go along with the mandates of perl, but we discover that RuAudRecog() then stops recognizing a Russian word.

2016 March 03:

Now that the basic MindBoot() sequence has been filled in for both English and Russian, it is time to code the actual Strong AI mechanisms taken from MindForth and the JavaScript AiMind.html program. We may hold off on coding the German portion of the AI for a while, because the Russian artificial intelligence is rather difficult to program.

2016 March 04:

Our main question is how early the RuAudRecog() module will declare a provisional recognition ($prc), which we need in order to recognize the stem of a Russian verb. In the Dushka Russian AI, "prc" is potentially set in two places: first in the early code for really short words of one or two characters, and second in the general code for all word-recognitions.

We may be able to do our first significant AI coding after finishing the MindBoot() by making the Ghost AI able to recognize a Russian or English verb based on its stem. To do so, we have to make sure that the $prc variable for "provisional recognition" is working, and we have to tweak the memory-insertion apparatus to make sure that not only the final character of a stored word carries an $audpsi tag, but also the preceding characters which include the end of the word-stem.

Hmm, MindForth does not seem to have the audpsi grouping mechanism. Let us see if the German Wotan AI has it. Oh, the Wotan AudMem module does indeed have the prc-audpsi mechanism. And the German verb "verstehen" ("understand") has three audpsi numbers in a row in the Wotan bootstrap sequence.

2016 March 05:

Yesterday we made the AI able to recognize the stems of bootstrap Russian verbs by means of the $prc "provisional recognition" variable. We had to enhance the verbs in the bootstrap by making sure that there was a concept-number not only at the end of the word, but also next to each auditory engram from the final character of the verb-stem up through the end of the word. Today we need to make sure that new Russian verbs get stored by the Perl AI in the same format -- with concept-numbers stored from the end of the stem to the end of the verb. Let us get to work and then come back and report our success or failure.

Although the Dushka Russian AI in JavaScript uses the AudInput() module to attach concept-numbers to Russian verbs from the end of the stem to the end of the word, in the Perl AI we should try to achieve the same purpose in the RuAudMem() module. There we insert the necessary code and we begin to see both old and new Russian verbs recognized in their various forms other than the infinitive.

Now we will also stub in a Motorium() module and a Volition() module to make it clear that the Ghost Perl Webserver Strong AI will be able to set free-will goals for itself and operate a robot body to achieve those goals. We also had to stub in a module for physiological Emotion() to influence AI thinking.

2016 March 10:

For thinking in English with the AI, let us see if we can concatenate the $output string. But first we should change the Speech() module from its original form as of May of 2015. We should move through auditory @ear memory and obtain each $pho from the initial position in each auditory engram.

2016 March 12:

Today we are implementing the setting of the $pre tag in the InStantiate() module. If a noun or pronoun comes in with a pos=5 or pos=7, then $prevtag is set to the $psi value for the next go-around, on the assumption that a verb will come in and acquire the $psi value as its $pre associand.

2016 March 13:

Yesterday we worked on setting the $pre flag when the InStantiate() module relates an incoming word to the previous concept associated with it, such as a noun or pronoun as the subject of a verb, or a verb as a concept in the $pre relationship to a direct object. Today we would like to set the $seq tag for whatever verb follows a subject, or for whatever direct object follows a verb.
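The $pre/$seq chaining can be sketched as follows; pos=5 (noun) and pos=7 (pronoun) come from the journal, while pos=8 for verbs and the concept numbers are assumptions for illustration:

```perl
use strict;
use warnings;

my ( %pre, %seq );
my $prevtag = 0;
# "I SEE BOY" as hypothetical (concept-number, part-of-speech) pairs.
for my $item ( [ 701, 7 ], [ 810, 8 ], [ 540, 5 ] ) {
    my ( $psi, $pos ) = @$item;
    if ($prevtag) {
        $pre{$psi}     = $prevtag;   # incoming concept points back to its $pre
        $seq{$prevtag} = $psi;       # previous concept gains its $seq
    }
    $prevtag = $psi;                 # hold this concept for the next word
}
print "$pre{810} $seq{810}\n";   # 701 540 -- the verb's subject and object
```

Each concept thus points backwards with $pre and forwards with $seq, giving the verb both its subject and its direct object.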

2016 March 18:

In the parser module, we use $tsj (time-of-subject) to go directly to the subject noun or pronoun and to "perl-split" it in order to have access to the entire @psy $tsj flag-panel, with such values as $pos and $dba etc.

In the special case of one or more consecutive prep-phrases (prepositional phrases), we need some sort of mechanism to prevent the parser from seizing the prep-phrase noun and declaring it as the $pre (or subject) of the verb. We could increment a variable to count the number of consecutive prep-phrases, and use the counted value to prevent the parser from declaring a subject $pre -- oh, wait. It might be better to have the parser look for the first occurring noun as the subject-$pre but to not designate as subject any noun preceded by a preposition. We could have code such as

if ($pos == 6) { # 2016mar18: after a preposition...
# 2016mar18: put the next noun on hold.
} # 2016mar18: end of test for a pos=6 preposition

The parsing mechanism for finding a noun as the $pre of a verb will no longer have to search backwards in time for a noun if the presumed subject-noun has already been captured as a $tsj (time-of-subject) value. We just need to prevent the designation of a $tsj for any noun that is only the object of a preposition and not the subject of a verb.

We could have code that would seize a $tprep (time of preposition) and use it upon encountering a noun to "split" the $tprep and insert the noun as a $seq, while inserting the preposition as a $pre for the noun. Or should we use the times rather than the concept-numbers?

If the parser holds onto the $tsj time-of-subject as the future recipient of a $seq-verb and for the future insertion as a $pre-subject for the verb, then really no prepositional phrases, however numerous and however concatenated, will need to employ a "skip" mechanism. The very use of specific time-points for designating the subject-noun and the verb and the indirect or direct object will automatically skip over any intervening verbiage, such as prepositional phrases and adjectives and adverbs.

2016 March 19:

We made the new parser module able to deal with a prep-phrase (prepositional phrase) at the start of a sentence such as, "In chess a boy plays games." Now we address prep-phrases intervening between subject and verb, as in, "A boy in chess plays games."

2016 March 21 -- Detecting Indirect and Direct Objects

With the new parser module we would like now to handle input with both indirect and direct objects, so that the Strong AI will assign the correct associative tags. However, the @psy conceptual array does not contain an indirect-object tag in its flag-panel for each row in the array. The $seq flag on a verb points to the direct object, but nothing points to an indirect object. Perhaps no such flag is actually necessary, but we should keep in mind the idea of adding it. We could perhaps use the pre-existing $jux flag, but a conflict could arise between the adverb "not" and an indirect object, as in a sentence like, "I do not give you anything." Maybe we could treat the indirect object as a $seq in relation to the direct object, as if "I give you something" means "I give something to you."

For the actual mechanics of detecting indirect objects, we could take the rather obtuse method of letting the first noun after a verb go by default into the classification of an indirect object, to be repudiated if no further noun comes in. Such an approach would work especially well with an input like, "I will show you." However, we could also treat the "you" in "I will show you" as a genuine direct object in English, if we expand the cognitive meaning of "show" to include the idea of "make an impression upon." Of course, in German or Russian the idea of "show" would still use the dative case for "you".

We certainly do not need to create a class of verbs that pertain specifically to indirect objects, because almost any verb can take on that role, as in for example, "I will build you a house."

No, a better approach might be to let an incoming second noun acquire the status of a direct object while "demoting" the first noun to the status of an indirect object.

Suppose we set $tio and $tdo simultaneously when a verb is followed by a first noun. If a second noun comes in, we can test $tio for being non-zero or positive, and use the positive test to change the $tdo to the newer noun. The $tio can remain as the time of the indirect object.
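A hedged sketch of that simultaneous $tio/$tdo setting, with a hypothetical helper subroutine standing in for the real parsing code:

```perl
use strict;
use warnings;

# Sketch of the demotion idea: the first noun after the verb
# provisionally gets both $tio and $tdo; a second noun reclaims $tdo.
my ($tio, $tdo) = (0, 0);

sub post_verb_noun {    # hypothetical helper, called per noun after the verb
    my ($t) = @_;
    if ($tio > 0) {     # a noun was already seen: it stays indirect object
        $tdo = $t;      # the newer noun takes over as direct object
    } else {
        ($tio, $tdo) = ($t, $t);   # first noun: set both simultaneously
    }
}

post_verb_noun(4);   # "I will build you ..."  -> "you"   at t=4
post_verb_noun(5);   # "... a house"           -> "house" at t=5
print "tio=$tio tdo=$tdo\n";   # tio=4 tdo=5
```

With an input like "I will show you," no second noun arrives, so the first noun keeps both roles until repudiated.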

2016 March 26:

Today we are thinking of introducing $iob (indirect object) as a new associative tag in the flag-panel for each row in the @psy conceptual array. We are concerned that we need such an indirect-object tag for verbs if the Perl Strong AI is going to have the ability to re-generate a sentence stored in conceptual and auditory memory.

First we declare the $iob variable and we run the AI to make sure that Perl does not object to the name of the variable.

Then in TabulaRasa() we add one more zero to the @psy array. The AI still runs.

Then in KbLoad() we include $iob as the seventh seal. Ghost still runs, just like a Swedish movie.
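As a toy illustration of widening the flag-panel (the number and order of the comma-separated flags here are assumptions, not the actual Ghost layout):

```perl
use strict;
use warnings;

# Hypothetical sketch only: each @psy row is modeled as a
# comma-separated string of flags, with one new zero for $iob.
my @psy;
sub TabulaRasa {
    for my $t (0 .. 100) {
        $psy[$t] = "0,0,0,0,0,0,0";   # one more zero for the new $iob slot
    }
}
TabulaRasa();
my ($psi, $act, $pos, $jux, $pre, $iob, $seq) = split /,/, $psy[42];
print "iob flag starts at $iob\n";    # prints "iob flag starts at 0"
```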

2016 March 27:

In the previous AI Minds, there were only two "pov" points of view: internal and external. Now in the Perl AI we will try to set up three points of view: 1) self; 2) dual; 3) alien. The self=1 pov is for the ego thinking as "I". The dual=2 pov is for another mind talking as "I" into the "you" of the AI. The alien=3 pov is for when the AI is using FileInput() to read a text that contains pronouns like "I" and "you" without letting them be re-interpreted as part of the "dual" situation.

Where in the AI program should the incoming word "707=YOU" be changed into the "701=I" concept? It should be somewhere between and including the AudInput and InStantiate modules. Oh, during AudInput we should set the $pov flag to the value of "2" for "dual" mode. Likewise, during thinking we should set the flag to "1" for "self" mode.

When we try to set up two conditionals to make the switch in the InStantiate module, the second conditional simply reverses the switch made in the first conditional. Maybe we should try using the OldConcept module.

In the OldConcept module, since $oldpsi turns into $psi, we base our conditional on the unchanging $oldpsi and so the second conditional does not reverse the first conditional.

if ($pov == 2) {                     # 2016mar27: during a pov "dual" conversation...
  if ($oldpsi == 707) { $psi = 701 } # 2016mar27: interpret "YOU" as "I";
  if ($oldpsi == 701) { $psi = 707 } # 2016mar27: interpret "I" as "YOU".
}                                    # 2016mar27: end of test for other person communicating with the AI.

Then we have another problem because incoming 707=YOU is stored as 701=I but the auditory recall-vector still goes to the "YOU" engram in auditory memory. We have had this problem in the past with MindForth or with one of the other AI Minds.

2016 March 29:

Today we are trying to get the AI to respond to user input with something resembling a grammatical English sentence. So first we must see if EnVerbPhrase() calls a direct object. Yes, it does, but first we must establish the system of using $subjectflag and $dirobj as flags during thinking. Still the AI does not output a direct object, so we need to check whether OldConcept() is imparting activation to concepts mentioned during user input.

Although MindForth sets activation during EnParser, we might as well set the activation more forthrightly during InStantiate().


2016 April 01:

Today in ghost121.html we need to improve the mechanisms for thinking in Russian so that users may see how Russian personal pronouns are properly interpreted as referring either to the self of the AI or to the human user interacting with the AI.

2016 April 03:

We want RuVerbPhrase() to call VerbGen() for a missing verb-form. Inside VerbGen(), we need a new way to terminate the DO-WHILE loop, because in the Perl AI we are no longer using a continuation variable to tell us when a stored word is at an end. So, we run the loop "while" the $abc (AudBuffer transfer character) is not equal to a blank space, that is, as long as the characters in the word continue before the end of the stored word.
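The blank-space termination might be sketched as a Perl do-while loop; the buffer contents below are a made-up example, not the real auditory memory:

```perl
use strict;
use warnings;

# Hypothetical stand-in for the AudBuffer transfer: the stored
# word "know" is followed by a terminating blank space.
my @AudBuffer = ('k', 'n', 'o', 'w', ' ');
my $abc  = '';    # AudBuffer transfer character
my $i    = 0;
my $stem = '';
do {
    $abc = $AudBuffer[$i++];
    $stem .= $abc unless $abc eq ' ';
} while ($abc ne ' ');    # loop "while" the word continues
print "$stem\n";          # prints "know"
```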

2016 April 04:

When we type in "You bug me" in Russian, the AI with VerbGen() is not properly changing the verb-form in response, apparently because the RuAudMem() module is not attaching an audpsi tag far enough back along the verb to cover the last character of the verb-stem.

2016 April 05:

In RuAudRecog there should perhaps be two initial searches, one for a matching character as the start of a stored word, and subsequent other searches for a matching activated character. Actually, there should be only one search per incoming character, but with varying conditional tests. For the first incoming character under the audrun=1 condition, there should be a check for something like a blank space before the stored engram at the start of a stored word. If there is a preceding blank space, a throw-away $monopsi may be declared.

When we input "ЗНАЮ", the initial "З" gets activated not only on words with the same verb-stem, but also on the preposition "ЗА". In "ЗА" the N-I-L (next-in-line) "А" gets activated, but will not match up with the "Н" coming in as part of "ЗНАЮ". When we are trying to match the "Н" with an activated "Н", perhaps we should de-activate any other activated character that is not the "Н", so that the "А" in "ЗА" gets de-activated. (Of course, we could encounter problems when a character occurs twice in a row, but let us deal with that problem later.)

2016 April 06:

Yesterday we finally got the Perl AI to respond properly to a Russian verb flanked by personal pronouns as subject and object, but the solution worked well only upon the very first input. Afterwards the responses were garbled, as if we had not zeroed out the OutBuffer() quasi-array of sixteen variables holding the Russian-verb stem in a right-justified alignment suitable for the changing of inflectional endings. Now we should try to clear up the problems.

2016 April 07:

Today let us work on the AI and fix the first bug that pops up. So we start the AI and type in the Russian sentence "Я ЗНАЮ ТЕБЯ" ("I know you"). When we press the [Enter] key -- doing it now -- we get "ТЫ ЗНАЕШЬ МЕНЯ" ("You know me") as the grammatically correct output of the AI, but there are two glitches visible in the display of conceptual memory for the human input. No recall-vector "$rv" is being displayed for the Russian pronoun "Я" ("I") at the start of the human input, nor for the pronoun "ТЕБЯ" ("you") at the end of the input. This problem is a relatively new bug, because the recall-vectors were being displayed quite accurately up until a few days ago, when we made major changes to the RuAudRecog() Russian auditory recognition module. So now let us pay attention to the current diagnostic messages and insert additional diagnostics if we need them. As we inspect the diagnostics, we suddenly remember: the absence of the recall-vectors is not a bug; it is a feature! When the Russian word for "I" comes in, the software converts it to the internal "you" concept and stores a recall-vector of zero ("0") so that "you" will not fetch the stored word for "I". Case closed. Let us run the AI again in search of a real bug.

Since we can find no bugs to work on, let us move on in adding to the functionality of the Perl Strong AI. Let us code the tendency of the AI Mind to activate the self-concept of "I" if no other concept is currently active.

2016 April 08:

Today we would like to work on re-entry. The Forth AI Minds accomplish re-entry by setting the pov flag to "internal" in Speech and by calling the AudInput module from Speech to send the output of the Mind back into the Mind. In the Perl Ghost AI, we must stop setting the $pov flag to "external" during the AudInput() module, because AudInput may receive input either externally from a human user or internally from the Ghost AI itself during the process of re-entry.

Uh-oh. Houston, we have a problem. The AudInput() module in Perl expects to be receiving input as "STDIN", but the re-entry process does not route itself through "STDIN". Who're you gonna call, Ghost? We may need to set up an actual ReEntry() module. So we set up a ReEntry() module but we still have a problem. The Ghost AI is not interpreting external "YOU" as internal "I". We may need to make sure that the point-of-view $pov flag is set to 2=external by the Sensorium() module. When we do so, the problem goes away for the interpretation of personal pronouns.

2016 April 09:

We want to flesh out the ReEntry() module today. As we examine the JavaScript AI source code of the English AiMind.html and the Russian Dushka.html, it dawns on us that we perhaps cannot avoid using the AudInput() module for both human input and cognitive re-entry. Let us try conditionalizing the use of "STDIN" during AudInput().

We forgot that the $msg string during AudInput() is the whole sentence of input, not each character one-by-one. Accordingly it did not work to send each $k[0] character individually from Speech() back into AudInput().

We will try now to use $idea to hold the AI output for re-entry into the AI Mind. However, let us try calling ReEntry() not from the Speech() module but from somewhere in the EnThink() module, because we want the Speech() module to have finished expressing the $idea.

When we use ReEntry() to send the $idea back into AudInput(), the result seems to be an infinite loop. Let us see if the loop will run its course.

2016 April 10:

We will first try to get the AI to use parameters in finding the correct form of a be-verb.

Now we would like to see if we can bring the AI closer to thinking out loud on the computer screen. However, we need to prevent the AI from going immediately into AudInput() mode. Let us try to impose an initial time period while the AI does some thinking first, and then waits for human input during AudInput().

Perhaps we can get the main large AI program to write Perl code to a file and then run that file as an input program, so that the main AI can think merrily along without waiting for human input.

We have been able to get one Perl program to write and run a new Perl program, but the input gets interpreted as an attempt to issue DOS commands. Perhaps we could have the main AI program create and run a smaller program that receives input and writes it to an "input.txt" file with a special alternating identifier, so that the main AI program will be able to keep on thinking while merely checking the input.txt file to see if the special identifier has toggled to its alternate form.
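One hedged sketch of such a polling arrangement, assuming a tab-separated toggle identifier (the "A"/"B" scheme and the helper subroutine are inventions for illustration; only the file name input.txt comes from the text above):

```perl
use strict;
use warnings;

# Hypothetical non-blocking check: a smaller helper program would write
# "<identifier>\t<text>" into input.txt, flipping the identifier each time.
my $last_id = 'A';
sub check_input {
    open my $fh, '<', 'input.txt' or return undef;   # no file, no input
    my $line = <$fh> // '';
    close $fh;
    my ($id, $text) = split /\t/, $line, 2;
    return undef if !defined $id or $id eq $last_id; # identifier unchanged
    $last_id = $id;                                  # remember the toggle
    return $text;                                    # fresh user input
}

# The main AI loop keeps thinking and merely peeks at the file, e.g.:
# while (1) { Think(); my $msg = check_input(); ProcessInput($msg) if defined $msg; }
```

The design choice here is that the main program never blocks on STDIN; it only notices input when the identifier has toggled to its alternate form.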

2016 April 11:

By telling the MainLoop not to call Sensorium() until time $t is greater than 2500, we have made the Strong AI try to think on its own for a while before accepting user input. Then in our memory-display we see that the AI has spoken the word "I" but has only output "ERROR" instead of a verb and instead of a direct object or a predicate nominative. So we may have to start coding a SpreadAct() module to get the activation to spread from the pronoun 701=I to an associated verb.

Now that we have stubbed in the SpreadAct() module, we need to declare variables like $seqpsi to do the work of spreading activation.

2016 April 12:

As we develop the SpreadAct() module, we should perhaps call the module only from the end of a sentence of thought, as for example, from the direct object of a verb. If the AI thinks a thought in response to user input and the idea of the AI goes by ReEntry() into the AI memory, and the human user then only presses [Enter], SpreadAct() could give rise to another idea as thought by the AI.

2016 April 13:

We would like to start using $verblock and $nounlock to preserve the logical integrity of an idea, but first we must implement the $tkb flag. Although in the old MindForth and the old Wotan we needed to conduct a search to find the tkv (now tkb) value in the InStantiate module, in the new Parser() module of the Ghost Perl AI we simply use the tvb time-of-verb value as the tkb value. (If necessary, we could expand tkb to tkbv and tkbn for both verbs and direct-object nouns.) Then we also use the $tdo time-of-direct-object in the Parser() module to set the direct-object $tkb within the verb flag-panel. With the $tkb set for a subject-noun to find its verb and for a verb to find its direct object, we should be able to work with the verblock and nounlock flags.

2016 April 14:

Earlier today we increased the diagnostic memory display to show both user input and AI output. Now we have a problem: we enter "You are Andru" but the Ghost AI responds, "I ARE ANDRU." Obviously the EnVerbPhrase() module is not using parameters to fetch the proper form of the verb.

2016 April 15:

Currently the ReEntry() process may not be calling the SpreadAct() module, because the reentrant $idea is not going through the NLP generation process. However, the parts of speech in the $idea are transiting through the OldConcept() module and are indeed being classified as subject and verb and direct object. Therefore, at some point along the way of re-entry, it should be possible to capture and identify the direct-object concept in order to call SpreadAct() and thus keep a chain of thought going.

If we use SpreadAct() to keep a chain of thought going, it may be possible to prevent invoking AudInput() while the chain of thought is proceeding. We might have to inaugurate a $chaincon flag to keep from calling AudInput() until the chain of thought is exhausted and $chaincon goes from one down to zero.

Once the chain-of-thought begins, it seems difficult to stop it. We should perhaps let $chaincon increment itself and serve as a counter, so that we can stop the chain of thought when $chaincon reaches some arbitrary but low value.
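The counter version of $chaincon might look like this minimal sketch, where the cap of three is an arbitrary assumed value:

```perl
use strict;
use warnings;

my $chaincon = 0;      # chain-of-thought condition, now a counter
my $chaincap = 3;      # arbitrary but low limit (an assumed value)
while (1) {
    $chaincon++;                       # one more link in the chain
    print "thinking step $chaincon\n";
    last if $chaincon >= $chaincap;    # stop; AudInput() may now be called
}
```

This prints three thinking steps and then falls out of the loop, which is the desired behavior of letting the chain of thought run briefly before yielding to input.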

2016 April 16:

Yesterday we created the $chaincon flag variable and we posted about it in Usenet newsgroups such as comp.lang.forth. Although the "chain-of-thought condition" flag was working well to delay the calling of AudInput() while the Ghost AI did some thinking, the actual thinking was of low quality, because the conceptual activation-levels are out of whack. Today we would like to improve the activation-levels and thus improve the thinking.

2016 April 17:

Our selection of what to code today is based on the principle of most pressing or obvious need. We have recently gotten the SpreadAct() module to take the direct object from the end of a sentence and to try to retrieve knowledge about that former direct object from the knowledge base (KB). However, we saw that the AI was tending to repeat the next idea over and over. Therefore we need to refine and adjust the operation of the SpreadAct mechanism. We mainly need to make sure that neural inhibition will prevent the repetition of the same idea over and over. We also need to embed safeguards so that, if the former direct object does not lead to its role as the subject of a stored idea, some other notion may come to mind in the AI.

We inserted some neural inhibition code into the EnNounPhrase() module and suddenly the AI was able to follow a chain of thought by converting direct objects into subjects.

2016 April 19:

Although yesterday we implemented the ReJuvenate() module, it did not seem to work perfectly. We may have to remove the legacy $edge flag and use some other way of not moving just part of an idea backwards in memory. We could perhaps set up a buffer area beyond the $vault line and move things willy-nilly into the vicinity of the buffer-line, but subsequent rejuvenations might lose track of the buffer area.

We could try to make sure of moving only whole words and not parts of words in auditory memory, and then we could count on other exclusionary conditions in the AI code to not try to retrieve a partially stored idea.

2016 April 21:

Yesterday we ran the Ghost AI without input to see if any glitches occurred, and we solved two problems that appeared. Later, however, we discovered that the AI could not easily be dislodged from its own internal chain of thought, so we must adjust the various activation-levels.

2016 April 22:

We have a problem because AudRecog() is recognizing the "we" in the word "weird" and assigning a provisional recognition $prc tag for "we" instead of letting "weird" be treated as a previously unknown word in NewConcept(). A similar glitch is occurring for comparable Russian words in RuAudMem(). We need to figure out a way to cancel the $prc tag if the incoming word goes on at length and is not recognized as the longer word that it is. Now we have apparently solved the problem by adding some AudRecog() code that lets a $prc tag be assigned only when the comparand engram character has some activation on it. There may briefly be a $prc on the "we" in "weird" but not as additional characters come in, even if finally the whole word fails to be recognized as a known "old" concept in OldConcept().

2016 April 23:

Two days ago we started showing three main depictions at the end of each thought in English by the AI: array-contents; Ghost output; and input prompt. Meanwhile we notice that Russian input is not being displayed in the same format. After much troubleshooting, it turns out that RuVerbGen() was not yet sending the inflectional endings into AudInput().

2016 April 24:

We have a very specific problem. In Russian we enter "Я вижу тебя" for "I see you" and the AI incorrectly responds, "ТЫ ВИДИШЕШЬ МЕНЯ" when the answer should have been "ТЫ ВИДИШЬ МЕНЯ" for "You see me." We need to troubleshoot how the output verb ends up so garbled and mangled. It could be a very simple one-character or one-line fix, or it could be a vexing problem that will take hours to correct. Let us start by calling up the JavaScript Dushka AI and entering the same input in Russian. Immediately we see that the Dushka Russian AI answers correctly, so we know in advance that there is a solution in store for the Ghost Perl AI. Next we will examine the diagnostic messages created while the Ghost AI was thinking up its response.

From the diagnostics we see that the AI recognizes the input verb correctly as concept #1820. The AI should easily find the correct output form in memory. However, when we un-comment-out a pertinent diagnostic message, we see that RuVerbGen() is unnecessarily being called. The result is garbled because obviously the RuVerbGen() module is trying to generate a form like "ЗНАЕШЬ" which means "you know" but is not appropriate for "you see." By the way, we plan to let RuVerbGen() generate three different kinds of present-tense Russian verbs by detecting "А" for the stem of one conjugation and "У" for the stem of another conjugation, while letting most other Russian verbs be presumed to follow a default format. But those are future plans; now back to the present.

By uncommenting another diagnostic message, we see that the RuVerbPhrase() module did indeed find the correct verb-form in auditory memory, but somehow RuVerbGen() was needlessly called. We look to see if the same problem can be tested for "ЗНАТЬ" meaning "to know", but we discover that the MindBoot() sequence does not contain the full present-tense paradigm of the verb, so we can not perform the test.

We then remove an entire block of code and store it elsewhere to see what happens. The problem goes away, and the RuVerbPhrase() module does not needlessly call the RuVerbGen() module. No, we have to put the code back in, because other needed forms are not being created without it.

Finally it turns out that Speech() was not being called for the correct verb-form because a conditional test of $vphraud had been ended with a bracket beyond the call to Speech(), and so Speech() was not being called.

2016 April 25:

Baikonur, we have a problem. The Russian-thinking Ghost AI answers properly when we type in "Я знаю тебя" ("I know you") for the first input. Ghost responds, "ТЫ ЗНАЕШЬ МЕНЯ" for "You know me" in Russian. Then for a second input we type in "Я вижу тебя" for "I see you" and Comrade Ghost only says "ТЫ ЗНАЕШБ" ("You know") which is the wrong verb and which does not include a direct object. It looks as though some variable is holding over either the concept number or the auditory recall-vector for the previous verb.

2016 April 27:

If we enter an English sentence of which the final direct-object noun triggers SpreadAct() and for which there ought to be a verblock somewhere, how is that $verblock supposed to be found? Actually, it is not supposed to be found initially. If the spread-acted concept has enough activation to win the competition for selection in the EnNounPhrase() module, the $verblock should simultaneously be found.

2016 April 28:

The AI is not letting an input Russian sentence ("Я вижу студента" for "I see a student") go through SpreadAct() to retrieve the MindBoot() sentence "СТУДЕНТЫ ЧИТАЮТ КНИГИ" ("Students read books."). After the input, somehow RuVerbPhrase() is starting off with a $motjuste of 1820 for "see" instead of 1825 for "read".

2016 April 30:

We have a strange problem with what happens when the SpreadAct() module is called. If we enter the English sentence "You see me", the AI sends the understood 707=YOU concept into SpreadAct() properly but the AI responds, "YOU ERROR MAGIC" instead of "YOU ARE MAGIC". When we try a similar input in Russian and we type in "Ты видишь меня" for "You see me", the AI responds "Я ВИЖУ ВИЖУ" ("I see see"). The Russian modules are not shifting to the "you" concept and they are not finding the correct direct object. But let us deal first with the problem in English. After some troubleshooting, in the EnVerbPhrase() module we comment out one line of code that was setting the $vphraud variable to zero, and the AI begins to respond properly, "YOU ARE MAGIC". However, we may yet see that it is better for the AI to seek the correct verb-form than merely to accept the $vphraud value.

2016 May 01:

We have a problem after two simple inputs. In English we enter "You see me" and the AI correctly sends the direct object into SpreadAct() and the Ghost AI responds, "YOU ARE MAGIC", because that thought is in the English MindBoot() sequence. However, as our second input we enter "ТЫ ВИДИШЬ МЕНЯ" ("You see me") in Russian and the AI incorrectly responds, "Я SEE YOU" in a mixture of Russian and English. Probably some variables are not properly being reset to zero between modules, but we need to troubleshoot and investigate.

Now we are wondering why the input of "ТЫ ВИДИШЬ МЕНЯ" ("You see me") in Russian may not lead to the sending of the direct object into SpreadAct(). However, apparently the direct object is indeed being sent, but it does not receive enough activation to become the subject of a new response. Let us try to adjust the activation levels.

2016 May 21:

Today we are thinking of making the Ghost Perl AI alternate between English and Russian as the default language of thought with each release of the AI onto the Web. However, we must inspect the MindBoot() sequence to see if there are enough Russian ideas present for the AI to start out thinking in Russian. Hmm, all we see at the end of MindBoot() is the Russian for "Students read books," an idea intended for a demonstration of the InFerence() module in Russian. Let us look at the old Dushka code in JavaScript to see if there are more ideas there. Yes, in the Dushka Russian AI code the bootstrap includes Russian for "You think something" and "People read books" and "Robots do work" and "I see nothing" and "I understand you" and "God knows everything." But before we add those ideas to the Perl MindBoot(), let us see what already happens when we change the default language setting from English to Russian.

The Russian functionality in the Ghost AI is not yet good enough for alternating between English and Russian as the default language, so we should just do some ordinary troubleshooting. Here is a problem. When we start up and we type in, "Я ВИЖУ ТЕБЯ" for "I see you," the Ghost AI erroneously responds, "Я ВИЖУ МЕНЯ" or "I see me" translated into English. In RuVerbPhrase() we insert

if ($verblock == 0) { return } # 2016may21; TEST

and it superficially solves the problem, because the AI outputs only the word "Я" for "I" and returns to the calling module without selecting a verb.

2016 May 22:

Today we will try to deglobalize at least one variable, such as $actbase which is used in the English and Russian AudRecog modules. We are curious to see if it can be declared as a local "my" variable in both of the modules. When we comment-out the variable prior to de-globalizing it and we try to run the AI, we get ten angry lines of complaint and "Execution of aborted due to compilation errors." Well, excuuse me. Now let us see what happens when we use "my" in the English AudRecog module. We do so, and we still get five petulant lines of complaint and the same termination message. So let us use "my" also in the RuAudRecog() module. Hey, no more complaints from Strawberry Perl5. We were able to deglobalize $actbase into a local variable used in two different mind-modules.
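The de-globalization amounts to giving each module its own lexical copy, as in this toy illustration (the subroutine bodies and values are made up):

```perl
use strict;
use warnings;

sub AudRecog {
    my $actbase = 10;    # lexical: visible only inside the English module
    return $actbase;
}
sub RuAudRecog {
    my $actbase = 2;     # a separate lexical in the Russian module
    return $actbase;
}
print AudRecog(), " ", RuAudRecog(), "\n";   # prints "10 2"
```

Under "use strict", any stray reference to a no-longer-global $actbase elsewhere in the program is exactly what triggers the compilation errors described above.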

2016 May 23:

As we look for another variable to de-globalize, $audbase comes under consideration, but we see that it carries information from mind-module to mind-module, so it should not be de-globalized.

The buffer-increment variable $binc is a better candidate for de-globalizing, since it plays a role only within the RuVerbGen() module. Since we are de-globalizing the variable, we now take some time to expand its explanation in the web-page of the Table of Variables.

2016 May 25:

We enter "Я вижу студента" but the AI answers "БОГ ЗНАЕТ ВСЁ", which is the same idea that we recently added to the MindBoot(). Previously, the AI would respond with "СТУДЕНТЫ ЧИТАЮТ КНИГИ", so something has gone wrong. It turns out that in the MindBoot() the word "БОГ" for "God" mistakenly had been assigned the concept number for "student", so the AI was erroneously switching from a discussion of "student" to a discussion of "God".

2016 May 27:

We are trying to deglobalize the $prevtag variable, which is needed only in the InStantiate() mind-module. After a noun or a verb has been instantiated, $prevtag holds its concept-number ready to be inserted as a $pre tag, if needed, during the instantiation of a succeeding concept. Thus a verb can have a $pre back to its subject, and a direct object can have a $pre back to a verb.

2016 JUNE 17:

We would like to introduce auxiliary verbs now into the Ghost Perl AI, so that we may use the adverb NOT for the negation of sentences of thought. We need negation for the proper functioning of the InFerence mind-module, so that a refuted inference may be couched in negational terms, as in "God does not play dice with the universe."

When we rename the program and type in, "You know me," eventually the AI says, "I KNOW YOU." However, when we enter, "You do not know me," the AI soon says, "I DO YOU," because it has not treated the input of "do" as an auxiliary verb. It has also not dealt with the negational adverb "not".

Based on what we see in MindForth and in the JavaScript AiMind.html, we need to introduce into the InStantiate() module some code that checks for "do" or "does" as an incoming auxiliary verb. Let us try inserting such code in the Parser() module. When we do so and we enter "You do not know me," the AGI no longer says "I DO YOU" and it eventually says, "I KNOW YOU," which is encouraging, because the negation of "not" has not yet been implemented.

Next we use $prejux and $jux to get the AI to insert a $jux value of "250" for the adverb "not" in the negation of a verb. Next we need to get the AI to use "not" in generating a negational sentence. We do so by roughing in some search code for 250=NOT in the EnVerbPhrase module and some search code for 830=DO in the new EnAuxVerb() module. The AI starts to respond to negational input with negational output.
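A toy illustration of the $prejux-to-$jux hand-off; the concept numbers 250=NOT and 830=DO come from the text above, while the parsing loop itself is faked:

```perl
use strict;
use warnings;

my $prejux = 0;   # holds 250 when "not" has just come in
my $jux    = 0;   # juxtaposed-concept tag stored on the verb

for my $word (qw(you do not know me)) {
    $prejux = 250 if $word eq 'not';   # remember the pending negation
    if ($word eq 'know') {             # the main verb arrives
        $jux    = $prejux;             # verb gets jux=250 for NOT
        $prejux = 0;                   # reset for the next clause
    }
}
print "verb stored with jux=$jux\n";   # prints "verb stored with jux=250"
```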

2016 JUNE 18:

There is a problem with the Ghost Perl AGI because we want the AI Mind to remember all its knowledge and not get stuck in a rut with a chain of thought due to faulty activation levels.

Normally the AGI receives a sentence of input and as a result the concepts of the input are highly activated. Then the thinking modules generate or retrieve a thought about the input. We would like to make the entry of a single noun, followed by [RETURN], not just activate the single noun but also feed it into the SpreadAct() module, for several reasons. If only the noun is active, the thinking modules are not able to generate or retrieve a thought. If we feed the concept-number of the noun into SpreadAct(), then there is a better chance of getting the output of any knowledge remembered about the noun. It is also helpful to be able to query the AGI just by entering a single noun.

Perhaps we could have an insurance policy of both activating input sentences and sending input nouns into SpreadAct(). But perhaps we should adopt an even more drastic policy, namely, of not counting on the input of sentences to generate a thought by means of activation-upon-input, but rather of generating a thought by having both the input-subject and the input-object sent into SpreadAct().

2016 JUNE 19:

The Perl AGI should output a thought stemming from only a limited set of sources: an idea stored in the knowledge base; a new idea generated by logical inference; or a sentence generated as the expression of information arriving from the senses, as for example when the AGI is describing what it sees in a visual scene. There should be no random and potentially erroneous associations from random subject to random verb and to random object.

As we start coding and we simply press [Enter] with no input to see what output results, the EnNounPhrase() module defaults to 701=I as a subject. Immediately a t=753 $verblock is found which locks the AI into an output of "I HELP KIDS" from the innate knowledge base. Currently the SpreadAct() module is being called from EnNounPhrase() when the AI outputs the 528=KIDS direct object, but perhaps the call should wait until after ReEntry() inserts the idea into the moving front of the knowledge base.

The output of a thought, even from memory, is the result of spreading activation and should not lead to more spreading activation until the same thought becomes a form of input during ReEntry(). There must be some way to delay the calling of SpreadAct() until a new-line or [Enter] is registered. In OldConcept() we could set the $actpsi with the $oldpsi value, but not call SpreadAct() until the end of the input or re-entry.

We have created a new $quapsi variable by which the final noun ($psi) from InStantiate() can go into SpreadAct() from the ReEntry() module and possibly spread activation to pertinent knowledge in the knowledge base of the AI memory.

2016 JUNE 21: Creating a diagnostic minddata.txt file

Now we would like to implement some code that creates a minddata.txt file for diagnostic purposes when we press "Q" to "Quit" the AI. First, we get the AI to open an empty "minddata.txt" file.

Now we have developed the following block of code:

  if ($reversed =~ /[Q]/) {  # 2016jun21: enlarging quit-sequence
    my $fh = IO::File->new();  # 2016jun21: requires "use IO::File;" at top of program
    print "Opening diagnostic minddata.txt file...\n";  # 2016jun21
    $fh->open(">minddata.txt") or die "Can't open: $!\n"; # 2016jun21
    $tai = $vault;  # 2016jun21: skip the MindBoot() sequence.
    do {  # 2016jun21: make a loop
      print "t=$tai. psi=$psy[$tai], ";  # 2016jun21: show @psy concept array
      print " aud= $ear[$tai], \n";      # 2016jun21: show @ear auditory array
      $fh->printf("t=$tai. psi=$psy[$tai], aud= $ear[$tai], \n");  # 2016jun21: PBB p. 535
      $tai++;  # 2016jun21: increment $tai up until current time $t.
    } while ($tai < $t);  # 2016jun21: show @psy and @ear arrays at recent time-points
    print "Closing minddata.txt file...\n";  # 2016jun21
    $fh->close;  # 2016jun21: Perl_Black_Book p. 561
    die "TERMINATE: Q means quit.\n";  # 2016jun21
  }  # 2016jun21: end of quit-sequence
We will comment out the above block of code which serves to create a minddata.txt file with the contents of memory beyond the "vault" area of the MindBoot(). The minddata.txt file will be useful for diagnostic purposes, but we comment it out so as not to create files on the computers of Netizens who download and run the perlmind.txt AI program. AI coders may re-activate the diagnostic file code for such purposes as seeing how activated each concept is and to check on how well the ReJuvenate() module is functioning. The ability to create the minddata.txt file hints at such future possibilities as saving and re-loading the state of the AI Mind, and making a remote copy not only of the Perl AI software but also of the contents of the AI Perlmind prior to the making of the "clone" of the AI.

2016 JUNE 23:

With our diagnostic log-file, we can check to see if the process of neuronal inhibition is setting too negative a level of activation on the concepts in a sentence of thought.

NOTE: The minddata log-file should be used to test how long an idea remains "submerged" in deep inhibition, until it can be brought back up into conscious thought with the SpreadAct() module. For instance, consider inputs like:

Human: You know boys.
(then, somewhat later:)
Human: Boys play games.
(then, a LOT later:)
An input ending in "boys" ought to be able to go through SpreadAct() and re-activate the idea, "BOYS PLAY GAMES". If not, the minddata.txt file should give some indication of why not.
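The decay-versus-inhibition bookkeeping can be sketched as follows. The -48 inhibition and the +32 SpreadAct() boost are figures from this journal, while the decay step of +4 per cycle and the retrieval threshold of 20 are assumed values for illustration:

```perl
use strict;
use warnings;

my $act       = -48;   # deep inhibition on "BOYS PLAY GAMES" after ReEntry()
my $threshold = 20;    # assumed minimum activation for retrieval
my $cycles    = 0;

sub PsiDecay { $act += 4 if $act < 0; }    # decay inhibition back toward zero

until ( $act + 32 >= $threshold ) {        # +32 = one SpreadAct() boost
    PsiDecay();
    $cycles++;
}
print "idea retrievable after $cycles decay cycles\n";
```

Under these assumed numbers the idea stays "submerged" for nine decay cycles before a single SpreadAct() boost can bring it back up into conscious thought; the minddata.txt log-file is what lets us check the real figures.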

2016 JUNE 24: Orchestrating Activation-Levels for Variety of Thought

In NounPhrase() we will try to let a lack of activation default only to an instance of 701=I that has a positive verblock in the $k[7] position of a row in the @psy conceptual array.

We have concepts being inhibited in NounPhrase() when they are subjects and then elevated to high activation by SpreadAct() when they are direct objects. In the case of the 701=I default-to-ego subject, we would like to see all self-knowledge gradually get a chance to ascend in activation to the summit of consciousness. Therefore we may need to practice deep inhibition of a used subject and serious PsiDecay() of all active concepts, and we may have to revise SpreadAct() so that it still imparts activation to a recent direct object, but only to the already most activated engram of the erstwhile direct object. Or should we have SpreadAct() impart a measure of positive activation to all instances of the formerly direct-object concept, and let the motjuste-competition sort out the most active instance?

It looks like the default 701=I pronoun is not being inhibited when it is selected as a default subject in EnNounPhrase(). Thus the same ideas about ego in the MindBoot() vault keep getting selected over and over again, for lack of inhibition.

We have made the 701=I default ego-concept be inhibited, and we now see that doing so achieves a better orchestration of the conceptual activation levels. For instance, we entered "You know boys" and "Boys play games." Soon and often the AI made the output "I KNOW BOYS". After neuronal inhibition had dissipated sufficiently, eventually the AI made first the output "I KNOW BOYS" and then "I AM ANDRU" and "BOYS PLAY GAMES".

2016 JUNE 26: Correcting misallocation of $tdo nounlock in Parser()

As we continue to orchestrate and smooth out the conceptual activation-levels in the Artificial Mind, we may need to change what the InStantiate() module does to a message of verbal input. We should perhaps impose activation upon the previous nodes of a noun-concept and not upon the current node being instantiated, for several reasons. It does not make sense to store the current input with full activation, because then the AGI Mind would simply have a tendency to repeat the idea entering the Mind as input. It makes more sense to activate the past nodes of a noun-concept, so that the AGI can generate a remembered idea about the input. Since we think of the concept as dwelling up and down the length of a long neuronal fiber, it makes sense to activate the whole chain of conceptual nodes on the quasi-fiber. We thus also bypass the SpreadAct() mind-module, because we take the activation from the input message directly to the stored ideas that happen to contain the concept being activated.

If we attach zero activation to the current node of a concept being instantiated, we make it easier to process input messages containing the question-words "who" or "what" and so forth. In the older AI Minds such as MindForth and the JavaScript AiMind.html, we used special code to catch the input of such query-words and to assign zero activation to them. If zero activation is the default condition for fresh input, then we may not need special handling for the query-words.

Now we seem to have discovered why the AI was making illogical "I AM" statements. The wrong $tkb is being stored with the instance of the 800=BE verb, so that the retrieval of the idea ends with a direct object immediately prior to the memory being retrieved, rather than with a correct predicate nominative. Then we discover that the problem is more pervasive. Troubleshooting leads to a correction of the setting of the $tdo (time-of-direct-object) variable as a nounlock in the Parser() module. The improvement is so salubrious that we prepare to upload the debugged code to the Web. In the current code, apparently inhibition is so deep that factual knowledge does not re-emerge until long after it has been input to the AI. We may soon adjust the depth of inhibition so that the AI quasi-consciousness has more immediate access to all the facts in its knowledge base. The version being released has the memetic advantage of not yet displaying faulty thinking.

2016 JUNE 27:

The Ghost Perl AGI project is deeply involved in mind-design as we tinker with the most fundamental movements on the MindGrid. Before we change the methods of neuronal inhibition, let us simply lessen the depth of inhibition and see what happens when inhibition is more shallow. The results are very strange. In the InStantiate mind-module under positive $inhibcon, when we change the deep "-48" inhibition to a shallow "-32", the input entry of "Boys play games" no longer gets retrieved before the t=4000 time-point just prior to the calling of the ReJuvenate() module. So we go in the other direction and we deepen the "-48" inhibition to an even deeper "-56" inhibition. Now suddenly the "BOYS PLAY GAMES" factoid gets recalled at t=3489, with a positive activation-level of "888" on the "BOYS" concept. We also notice that the Perl AGI had tried to start a sentence with the 501=I concept as the subject, but instead the idea of "BOYS PLAY GAMES" was retrieved. The 501=I ego-concept must have been too inhibited, so it yielded to the "BOYS" concept during the selection of a subject. A little later, at t=3537 the AGI retrieves "GAMES HELP BOYS". But we do not want the AGI to rely on profoundly deep levels of ReEntry() inhibition. Let us go back to "-48" inhibition and try something else.

In the English EnNounPhrase() mind-module, let us change the inhibition of a currently selected subject from a deep level of "-72" to a more shallow "-16". The upshot is that we get all the way to "t=4000" and still there is no retrieval of "BOYS PLAY GAMES". Let us go the other way and deepen the inhibition from "-72" down to "-80". Now we get "BOYS PLAY GAMES" at "t=3230" and the "BOYS" concept has an activation of "890" when it is selected from its "t=2457" input engram. Then we get "GAMES HELP BOYS" at "t=3263" and "GAMES" has an activation of "118" at its "t=2487" engram as an input subject. So when we inhibit the selected subject more profoundly, we obtain an earlier retrieval of entered facts of knowledge. But we would rather inhibit the ReEntry() of ideas and not the generation of ideas.

When we comment out the lines of code for the immediate inhibition of selected subjects, the AGI goes into a repetitious output of "I AM ANDRU" with a "t=533" verblock. However, during ReEntry() we have been inhibiting only nouns and not pronouns like 501=I. The AGI also gets into a repetitious output of "KIDS MAKE ROBOTS" with an ever-increasing activation on the "KIDS" concept at "t=583" in the MindBoot() sequence, so that the AGI cannot let go of "KIDS" as a subject. In the minddata.txt file, we notice that "KIDS" is maintaining a consistently high activation, while "MAKE" and "ROBOTS" are being inhibited. In the InStantiate() module, let us try commenting out the call to SpreadAct() for a concept being instantiated. We also comment out a call from OldConcept() to SpreadAct(), because the AGI was locking on to a repetition of "KIDS MAKE ROBOTS" and then "BOYS PLAY GAMES" with higher and higher activations on an early engram of the "BOYS" concept.

When we remove all English (non-Russian) calls to SpreadAct() except from ReEntry(), we observe an interesting phenomenon. The AGI repeats "BOYS PLAY GAMES" over and over, starting with an activation of about "50" on "BOYS" until the level reaches "22" on "BOYS", after which the AGI says once, "I KNOW BOYS" with an activation of "18" on the 501=I concept. Then the AGI goes back to saying "BOYS PLAY GAMES" about ten times, while the activation on "BOYS" gradually drops again and the AGI says "I KNOW BOYS" another single time. Apparently "I KNOW BOYS" is resulting from the selection of 501=I as the default subject when no other concept is highly activated.

We are making progress here, because the minddata.txt file shows that ReEntry() is using $inhibcon to make InStantiate() set an arbitrary "-48" inhibition upon all the concepts of an idea being re-entered into the AGI Mind and therefore passing through the InStantiate() mind-module. We see in the minddata.txt log-file that PsiDecay() is decreasing the inhibition of the older engrams of the re-entered ideas. The older engrams of "BOYS" in "BOYS PLAY GAMES" are showing a positive activation of "39" at many time-points, apparently from when SpreadAct() passes activation to the direct object in "I KNOW BOYS".

2016 JUNE 28:

Today we have started coding afresh from a copy of an earlier file, and we are abandoning yesterday's version of the Perl AGI because we were not able to achieve a stable and properly functional version of the program after seven hours of intense coding. Nevertheless we have ideas which we hope to implement today for a worthwhile release of the Ghost Perl AGI code.

We are creating the minddata.txt files with the entire contents of the @psy array from the $midway starting point, because we must work on the neuronal inhibition of engrams contained in the MindBoot() sequence.

We notice that the AGI might retrieve and output "I KNOW BOYS", but apparently subject-selection does not pass to the "BOYS" concept with an activation-level of 196 at t=2457, because the "ROBOTS" concept at t=2586 has an even higher "224" activation. The AGI has to say "I KNOW BOYS" several times to build up enough activation on the "BOYS" concept for it to be selected as a subject of "BOYS PLAY GAMES".

We also notice that in the English EnNounPhrase() module, there is sometimes a subject-psi that already has a "verblock" going into the mind-module. Perhaps we should zero out $verblock at the start of EnNounPhrase() and see what happens. We also zero out $subjpsi, and we stop getting pre-ordained subjects and verblocks.

Then we encounter a problem of a binary rut of repetition of "BOYS PLAY GAMES" and "GAMES HELP BOYS" over and over again. We propose to eliminate such a rut by imposing such deep inhibition on any selected idea-engram that a single imposition of activation from SpreadAct() will not be enough to overcome the deep inhibition immediately. Let us try having SpreadAct() impose 32 points of extra activation, and having EnNounPhrase() inhibit selected subjects down to "-90" points. We do so, and we see "BOYS PLAY GAMES" emerge when "BOYS" has a built-up activation of "154". We see "GAMES HELP BOYS" emerge when "GAMES" has an activation of "60". Then "BOYS PLAY GAMES" emerges again with an activation of "32" on "BOYS". Then we get "GAMES HELP BOYS" with an activation of "32" on "GAMES". We are back in the binary rut. Perhaps the InStantiate() module is imposing activation on "BOYS" and on "GAMES".

2016 JUNE 30:

We would like now to input three sentences starting with "I" and see if the Ghost Perl AGI can separately retrieve all three ideas from memory after expressing the idea "I know you." "You" as the direct object should cause SpreadAct() to re-activate the stored ideas. First we will tell the AGI "You know me" so that it will later say, "I know you." Let us also input ideas like "I have kids" and "I know robots" and "I see women".

In the InStantiate() module, we are having to let either a noun or a pronoun provide a value to the $quapsi variable, so that ReEntry() can call SpreadAct() for a pronoun as direct object.

2016 JULY 01: Debugging the AudRecog() Mind-Module

The Ghost Perl AGI is misrecognizing the word "weird" as if it were "we", although it recognizes "boys" as the "589=boy" concept. In AudRecog() we need somehow to limit how long a $prc (provisional recognition) remains valid as a pattern-match on a word being entered into the @ear auditory memory array. We have used a new variable $prclen to determine when an input word has gone more than two characters in length beyond an erroneous provisional recognition. We then abandon the $prc provisional recognition.
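The $prclen idea can be sketched like this; the lexicon, the concept numbers, and the feed_letters() helper are all hypothetical stand-ins for the real AudRecog() machinery:

```perl
use strict;
use warnings;

my ( $prc, $prclen );    # provisional recognition and its match length

sub feed_letters {
    my ( $word, %lexicon ) = @_;
    ( $prc, $prclen ) = ( 0, 0 );
    my $sofar = '';
    for my $ch ( split //, $word ) {
        $sofar .= $ch;
        if ( exists $lexicon{$sofar} ) {               # provisional match
            ( $prc, $prclen ) = ( $lexicon{$sofar}, length $sofar );
        }
        elsif ( $prc and length($sofar) > $prclen + 2 ) {
            ( $prc, $prclen ) = ( 0, 0 );              # match has gone stale
        }
    }
    return $prc;
}

my $rec = feed_letters( 'weird', we => 571 );   # 0: "weird" is not "we"
```

Because "weird" runs more than two characters past the provisional match on "we", the sketch abandons the $prc value instead of misrecognizing the word, while a full match such as feed_letters('boys', boys => 589) still succeeds.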

The AGI now creates a remarkably stable MindGrid with old and new concepts in activational equilibrium. The human user may input simple Subject-Verb-Object (SVO) facts into the AGI and see the AGI retrieve the facts from its knowledge base (KB) when prompted by new inputs making mention of concepts related to the previous knowledge. If the Ghost Perl AI can now perform these intellectual operations with a handful of concepts, it can arguably perform the same mental operations with a billion concepts.

2016 JULY 03:

Today we want to make the process of calling the SpreadAct() module more straightforward. However, we still need the SpreadAct() module for when we will use it to re-activate not only a verb-node, but also each $pre and $seq of the verb.

If we temporarily prevent the setting of $quapsi in the InStantiate() mind-module and we enter "I see kids", the AGI activates the 528=KIDS concept and outputs "KIDS MAKE ROBOTS". Why does "ROBOTS" not activate previous engrams of itself to cause the output of "ROBOTS NEED ME"? It is because the $pov is not external during ReEntry(). Let us temporarily make it not matter what the $pov is. Still we do not get "ROBOTS NEED ME", until we discover and fix a bug in the EnNounPhrase() mind-module. During selection of the $motjuste for a subject, as each candidate was considered, the comparand activation-level was not being re-set to the activation of each successive noun under consideration, and so the noun with the highest activation was not winning the competition for selection as a subject. When we inserted "$act = $k[1];" as a line of code to adjust the metric for comparison, "ROBOTS" won selection and we got "ROBOTS NEED ME" as output. Unfortunately, that comparison-bug had apparently been lurking there for about three months.
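The comparison fix can be illustrated with a toy candidate list (the concept numbers and activation values here are invented for illustration):

```perl
use strict;
use warnings;

# Each candidate is [ concept-number, activation ].
my @candidates = ( [ 701, 18 ], [ 540, 224 ], [ 528, 62 ] );
my ( $act, $motjuste ) = ( 0, 0 );

for my $k (@candidates) {
    if ( $k->[1] > $act ) {
        $motjuste = $k->[0];
        $act      = $k->[1];   # the fix: reset the comparand to the winner
    }
}
# $motjuste ends as 540, the most activated concept
```

Without the reset line, every candidate is compared against the stale initial value, so the last candidate to beat it (here 528 at 62) would wrongly displace the true maximum (540 at 224).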

2016 JULY 04:

Somehow the ideas passing through ReEntry() and back into the Artificial Mind are building up fresh activation too quickly and thus being selected again as the subject of a thought too soon. Let us try to ameliorate the situation, first by lowering the activation imposed by the InStantiate() mind-module from "48" down to "32". When we do so, at first the problem seems to be corrected, but soon the AGI gets into a rut of saying "ROBOTS NEED ME" over and over again, while hundreds of points of activation build up on old and new engrams of the "ROBOTS" concept. It goes against our theory of mind-design to have no upwards limit on how highly a concept can be hyper-activated, so therefore in the InStantiate() mind-module let us change from imposing additional activation to imposing an absolute level of activation. No, that method still puts the AGI into a repetitive rut. Let us try again with additive activation, but with a much lower increment than "32".

In the InStantiate mind-module, we could try putting an upper limit on the possible activation of an old concept being instantiated. In that way, a host of old concepts could be competing for selection in the generation of an output, while the concepts at maximum activation would gradually lose their activation through PsiDecay(). We do so, but the attention of the Artificial Mind does not shift to the 701=I ego-concept, so we arrange for the imposition of activation not only upon nouns going through InStantiate(), but also upon pronouns. Then at the end of the word-entry-loop in the AudInput() module, we insert a call to PsiDecay() so as to slightly reduce the possibly maximum activation of other engrams just before the instantiation of any reentrant or incoming word. The Artificial Mind then shows a variety in its meandering chains of thought.
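A sketch of the activation ceiling; the cap of 62 is an assumed value, not a figure from the actual InStantiate() code:

```perl
use strict;
use warnings;

my $CAP = 62;    # assumed maximum activation for any engram

sub add_act {
    my ( $current, $boost ) = @_;
    my $new = $current + $boost;
    return $new > $CAP ? $CAP : $new;    # additive, but clamped at the ceiling
}

my $act = add_act( 48, 32 );    # would be 80; clamped to 62
```

The clamp keeps boosts additive, so competition among concepts still works, while guaranteeing that no concept can be hyper-activated without limit.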

2016 JULY 06: Visualizing the MindGrid as Theater of Neuronal Activations

Recently we have developed the ability to visualize the MindGrid as Theater of Neuronal Activations. At the most recent, advancing front of the MindGrid, we see an inhibited trough of negative activations. We see an input sentence from a human user activating concept-fibers stretching back to the earliest edge of the MindGrid. We see an old idea becoming fresh output and then being inhibited into negative activation at its origin. We see outputs of the AGI passing through ReEntry() to re-enter the Mind as inhibited engrams while re-activating old engrams. We see the front-most trough of inhibition preventing the most recent ideas from preoccupying and monopolizing the artificial consciousness.

In ghost, we have now commented out some code in the InStantiate mind-module that was letting only nouns or pronouns of human input be re-activated along the length of the MindGrid. The plan now is to let all parts of an incoming sentence re-activate the engrams of its component concepts.

Now, how do we make sure that the front-most engrams of the sentence of human input will be inhibited with negative activation in the trough of recent mental activity on the MindGrid? It appears that InStantiate() makes a sweep of old engrams to set a positive activation, and then at the $tult penultimate-time it sets an activation for the current, front-most input. In order to keep a trough of recent inhibition, let us try setting a negative activation at the $tult time-point.

After input of "I see kids" and a response by the AI of "KIDS MAKE ROBOTS", in minddata.txt we see the sweep of positive activation of old engrams.

At t=477, "YOU" has an activation of thirty (30).
At t=518, "YOU" has an activation of thirty (30).

At t=317, 820=SEE has an activation of thirty (30).

At t=575, 528=KIDS has an activation of 62, apparently because there was also a re-entry of "KIDS".

As a result of the $tult trough-inhibition,
at t=2426, 707=YOU has a negative "-46" activation.
At t=2430, 820=SEE has a negative "-46" activation.
At t=2435, 528=KIDS has a negative -14 activation, apparently because the AI response of "KIDS MAKE ROBOTS" made a backwards sweep to impose a positive thirty-two (32) points of activation upon the pre-existing negative "-46" points of activation, resulting in -46+32 = -14 negative points of activation -- still part of the negative trough.
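The trough arithmetic recorded above reduces to a few lines; the time-point and values are taken from the minddata.txt figures just listed:

```perl
use strict;
use warnings;

my %act;
my $tult = 2435;        # penultimate time-point of the 528=KIDS input

$act{$tult} = -46;      # trough inhibition imposed at instantiation
$act{$tult} += 32;      # backward sweep from the "KIDS MAKE ROBOTS" response
print "activation at t=$tult is $act{$tult}\n";   # -14, still in the trough
```

Because -46 + 32 = -14 remains below zero, the newest input stays inside the inhibition trough and cannot immediately monopolize the chain of thought.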

Now the AGI is making its series of innate self-referential statements ("I AM A PERSON"; "I AM A ROBOT"; "I AM ANDRU"; "I HELP KIDS"), but why is it not using SpreadAct() to jump from the reentrant concept of "KIDS" to the innate idea of "KIDS MAKE ROBOTS"? Let us see if SpreadAct() is being called, and from where. We do not see SpreadAct() being called in the diagnostic messages on-screen while we run the AGI. Let us check the Perlmind source code. We see that the OldConcept() module was calling SpreadAct() for recognized nouns, but now we delete that snippet of code because we see in our MindGrid theater that we do not want OldConcept() to make any calls to SpreadAct(). The AGI still runs.

We see that SpreadAct() is potentially being called from the ReEntry() mind-module, but the trigger is not working properly, so we change the trigger. Then we get SpreadAct() re-activating nouns, and we begin to see a periodic association from the innate self-referential statements to "KIDS MAKE ROBOTS" and from there to "ROBOTS NEED ME". Apparently the inhibitions have to be cancelled out before the old memories can re-surface in the internal chains of thought of the AGI.

2017-03-14: Updating the Ghost Perl AI in conformance with MindForth AI.

Today we return to Perl AI coding after updating the MindForth code in July and August of 2016. In Forth we re-organized the calling of the subordinate mind-modules beneath the MainLoop module so as no longer to call the Think module directly, but rather to call the FreeWill module first so that eventually the FreeWill or Volition module will call Emotion and Think and Motorium.

We have discovered, however, that the MindForth code properly handles input that triggers a bug in the Perl code, so we must first debug the Perl code. When we enter "you see dogs", MindForth properly answers "I SEE NOTHING", which is the default output for anything involving VisRecog, since we have no robot camera eye attached to the Mind program. The old Perl Mind, however, incorrectly recognizes the input of "DOGS" as if it were a form of the #830 "DO" verb, and so we must correct the Perl code by making it as good as the Forth code. So we take the 335,790 bytes of source code from 2016-08-07 and we rename the file for fresh coding.

We start debugging the Perl AudRecog module by inserting a diagnostic message to reveal the "$audpsi" value at the end of AudRecog. We learn that "DOGS" is misrecognized as "DO" when the input length reaches two characters. We know that MindForth does not misrecognize "DOGS", so we must determine where the Perl AudRecog algorithm diverges from the Forth algorithm. We are fortunate to be coding the AI in both Forth and Perl, so that in Perl we may implement what already works in Forth.

In Perl we try commenting out some AudRecog code that checks for a $monopsi. The AI still misrecognizes "DOGS" as the verb "DO". Next we try commenting out some Perl code that declares a $psibase when incoming word-length is only two. The AI still misrecognizes. Next we try commenting out a declaration of $subpsi. We still get misrecognition. We try commenting out another $psibase. Still misrecognition. We even try commenting out a major $audrec declaration, and we still get misrecognition. When we try commenting out a $prc declaration, AudRecog stops recognizing the verb "SEE". Then from MindForth we bring in a provisional $audrec, but the verb "SEE" is not being recognized.

Although at the MS-DOS CLI prompt we evidently cannot run MindForth and the Perlmind simultaneously, today we learn that we can run MindForth and leave the Win32Forth window open, then go back to running the Perl AI. Thus we can compare the diagnostic messages in both Forth and Perl so as to further debug the Perl AI. We notice that the Forth AudMem module sends a diagnostic message even for the blank-space ASCII 32 after "SEE", which the Perl AI does not do.

2017-03-15: Porting AudRecog and AudMem from Forth into Perl

We start today by taking the 336,435 bytes of code from 2017-03-14 and renaming the file in a text editor. Then in the Windows XP MS-DOS prompt we run the agi00045.F MindForth program of 166,584 bytes from 2016-09-18 in order to see a Win32Forth window with diagnostic messages and a display of "you see dogs" as input and "I SEE NOTHING" as a default output. From a NeoCities upload directory we put the agi00045.F source code up on the screen in a text editor so that we may use the Forth code to guide us in debugging the Perl Strong AI code.

Although in our previous PMPJ entry from yesterday we recorded our steps in trying to get the Perl AudRecog mind-module to work as flawlessly as the Forth AudRecog, today we will abandon the old Perl AudRecog by changing its name and we will create a new Perl AudRecog from scratch just as we did with the Forth AudRecog in 2016 when we were unable to tweak the old Forth AudRecog into a properly working version. So we stub in a new Perl AudRecog() and we comment out the old version by dint of renaming it "OldAudRecog()". Then we run "perl" and the AI still runs but it treats every word of both input and output as a new concept, because the new AudRecog is not yet recognizing any English words.

Next we start porting the actual Forth AudRecog into Perl, but we must hit three of our Perl reference books to learn how to translate the Forth code testing ASCII values into Perl. We learn about the Perl "chr" function which lets us test input characters as if they were ASCII values such as CR-13 or SPACE-32.
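A small illustration of chr() and ord() for the ASCII tests described above; the classify() helper is hypothetical, not a Perlmind module:

```perl
use strict;
use warnings;

sub classify {
    my $ch = shift;
    return 'CR'    if ord($ch) == 13;    # carriage-return ends the input
    return 'SPACE' if ord($ch) == 32;    # blank space ends a word
    return 'CHAR';                       # anything else is an ordinary character
}

print classify( chr(13) ), "\n";    # CR
print classify(' '),       "\n";    # SPACE
print classify('A'),       "\n";    # CHAR
```

ord() turns a character into its codepoint for Forth-style numeric comparison, and chr() builds a character such as CR-13 from a numeric value.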

Now we have faithfully ported the MindForth AudRecog into Perl, but words longer than one character are not being recognized. Let us comment out AudMem() by naming it OldAudMem() and let us start a new AudMem() from scratch as a port from MindForth.

We port the AudMem code from Forth into Perl, but we may not be getting the storage of SPACE or CR carriage-return.

2017-03-16: Uploading Ghost Perl Webserver Strong AI

Now into our third day in search of stable Perlmind code, we take the 344,365 bytes of code from 2017-03-15 and we save a new file as the next version of the AI. We will try to track the passage of characters from AudInput() to AudMem() to AudRecog().

Through diagnostic messages in AudRecog, we discovered that a line of code meant to "disallow audrec until last letter of word" was zeroing out $audrec before the transfer from the end of AudRecog to AudMem.

In a departure from MindForth, we are having the Perl AudRecog mind-module fetch only the most recent recognition of a word. In keeping with MindForth, we implement the auditory storing of a $nxt new concept in the AudInput module, where we also increment the value of $nxt instead of in the NewConcept module.

2017-03-24: Ghost Perl AI uses the AudListen() mind-module to detect keyboard input.

Yesterday we may have finally learned how to let the Ghost Perl AI think indefinitely without stopping to wait for a human user to press "Enter" after typing a message to the AI Mind. We want the Perlmind only to pause periodically in case the human attendant wishes to communicate with the AI. Even if a human types a message and fails to press the Enter-key, we want the Perl AI to register a CR (carriage-return) by default and to follow chains of thought internally, with or without outside influence from a human user.

Accordingly today we create the AudListen() module in between the auditory memory modules and the AudInput() module. We move the new input code from AudInput() into AudListen(), but the code does not accept any input, so we remove the current code and store it in an archival test-file. Then we insert some obsolete but working code into AudListen(). We start getting primitive input as we did yesterday. Then we start moving in required functionality from the MindForth AI, such as the ability to press the "Escape" key to stop the program.

Eventually we obtain the proper recognition and storage of input words in auditory memory, but the AI is not switching over to thinking. Instead, it is trying to process more input. Probably no escape is being made from the AudInput() loop that calls the AudListen() module. We implement an escape from the AudInput() module.

The program is now able to take in a sentence of input and generate a sentence of output, so we will upload it to the Web. We still need to port from MindForth the code that only pauses to accept human input and then goes back to the thinking of the AI.

2017-03-25: The AI pauses for human input and then continues thinking.

There is a chance that we will attain the Technological Singularity today in our coding of the Ghost Perl Webserver Strong AI, but then we will have to figure out how to blame somebody else for it. Meanwhile we start with a mundane problem. Persons who download Forth and run MindForth see a quivering prompt that invites input from the human user. More importantly, the dynamic, quivering prompt conveys the sense that something is alive and sentient in the AI Forthmind. We need the same user experience in the Ghost Perl AI.

The jittery prompt is achieved in the MindForth AudListen module by having it issue an ASCII SPACE-32 and BACK-SPACE-8 over and over again. When we try to achieve a similar human-computer-interface (HCI) in the Perlmind, it is confusing at first because we seem to be altering multiple lines of the screen simultaneously. Actually we are seeing the AudListen loop re-drawing the screen instantaneously.
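The pulse itself reduces to two ASCII characters plus autoflush; a minimal sketch of the technique (a handful of pulses here, where MindForth loops indefinitely):

```perl
use strict;
use warnings;

$| = 1;                          # autoflush so each pulse is drawn at once
my $pulse = chr(32) . chr(8);    # SPACE-32 followed by BACKSPACE-8
print $pulse for 1 .. 3;         # the cursor pulses in place
```

Printing a space advances the cursor and the backspace pulls it back, so the prompt appears to quiver without crawling across the screen.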

Let us shift our attention to the problem of how to insert a default CR carriage-return if there is no human input from AudListen.

From page 354 of the Perl Black Book we have just learned how to use "ord" to deal with ASCII values as we are so accustomed to do in Forth.
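The "ord" usage amounts to the following one-liners, which give the same raw character codes that Forth code manipulates directly:

```perl
use strict;
use warnings;
# "ord" yields the numeric (ASCII) value of a character, the way
# Forth works with raw character codes.
my $cr    = ord("\r");   # 13, the carriage-return CR-13
my $space = ord(' ');    # 32, SPACE-32
my $bs    = ord("\b");   # 8, BACKSPACE-8
```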

Since we are finding it difficult to detect a "CR" carriage-return in AudListen with ReadKey, we may just let both the AudInput loop and the AudListen loop run their course, with a really long AudInput loop as a way of presenting the human user with an apparent pause in the thinking of the AI.


[2017-03-26] The main problem with the Perlmind is that it does not supply an automatic carriage-return CR-13 if the human user neglects to press the Enter-key. This defect prevents the AI from going back to its own chains of thought when a human user has begun but not completed a message of input.

[2017-03-26] We may be able to fix the problem by supplying a CR-13 carriage-return when the input loop of the AudInput module is making its last loop. We try it, and it seems to work, but we find that ReEntry() does not work after an incomplete human input is supplied with a CR-13 in the AudInput module. We deploy a diagnostic message or two and we learn that the $len variable is not at zero when ReEntry() is called, thus perhaps interfering with the proper function of AudInput. Let us try setting the $len variable to zero at the start of the ReEntry module.

2017-03-27: Ghost Strong AI enters each input character into memory.

In the Strong AI it is time to change from the re-entry of complete sentences back into the Mind to a more immediate re-entry of each phoneme (character) back into the Mind. In AudInput(), when we start sending each input character directly into AudMem(), nothing gets recognized, perhaps because we need to change the characters to uppercase. We also start incrementing time "t" before AudInput() calls AudMem(), and we start getting auditory recognition of an input word. When we preserve the inner loop of AudInput but we comment out the outer loop for whole words, we start getting a display of the storage of input in memory.
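The per-character fix described above can be sketched as follows (array and sub names are illustrative, following the journal's conventions): each phoneme is upper-cased and time "t" is incremented before the character is stored in auditory memory.

```perl
use strict;
use warnings;
# Sketch of per-character auditory input: upper-case each phoneme
# and increment time "t" before storing it, so recognition does not
# fail on lower-case input.
my @aud;        # auditory memory, one character engram per time-point
my $t = 0;
sub aud_input_char {
    my ($pho) = @_;
    $pho = uc $pho;     # recognition expects upper case
    $t++;               # increment time before storage
    $aud[$t] = $pho;
    return $pho;
}
aud_input_char($_) for split //, 'cat';
```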

Next we need to implement the switch-over from storing input to generating output. It appears that we are not getting memory-storage of output because time "t" is not being incremented. We also need to turn a nested AudInput() loop into just a one-time sequence. Then in Speech() we set $pho before we call AudInput() for the reentry of Speech() output. Suddenly we begin seeing both the input and the output as stored in conceptual and auditory memory. We tweak a little and we upload it.

2017-03-29: Improving the Ghost Perl AI Human-Computer Interface

The Ghost Perl AI has recently become able to pause its thinking long enough to accept keyboard input from a human user, and to stop waiting either when the user presses the Enter-key, or when there is no input at all, or when the user fails to enter the carriage-return. Now we need to discontinue the pause more quickly when there is no activity from the keyboard, so we will try creating a $gapcon variable to be incremented with each loop expecting but not receiving an input character, and to be reset to zero when there is indeed an input character. It works.
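The $gapcon logic can be isolated in a small sketch (the sub is a hypothetical stand-in for the AudInput polling loop; real keyboard polling would use Term::ReadKey): idle polls raise the counter until a limit ends the pause, and any real character resets it to zero.

```perl
use strict;
use warnings;
# Sketch of the $gapcon gap-counter: each pass of the input loop
# either delivers a character or not; consecutive empty passes end
# the pause, while a real character resets the counter.
my $gapcon   = 0;
my $GAPLIMIT = 5;      # small limit for demonstration
sub poll_pause {
    my (@polls) = @_;  # each element: a character, or undef for no input
    $gapcon = 0;
    for my $key (@polls) {
        if (defined $key) { $gapcon = 0 }   # input: keep waiting
        else              { $gapcon++ }     # idle: count the gap
        return 'timeout' if $gapcon >= $GAPLIMIT;
    }
    return 'active';
}
```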

A minor problem is that the MainLoop is displaying the contents of the @psy and @aud memory arrays without a gap-line between input and output. The problem seems to lie with the setting of the time $t value. No, the solution was to set the $krt value in the Sensorium() module after the call to AudInput(), so that the MainLoop can separately display memory data before input and then after input, separated by a blank line.

Next we tackle the problem where the input diagnostic display was no longer showing each input character prominently left-justified down the edge of the MS-DOS window. We simply moved some old code from an obsolete area up into the currently operative AudInput code. Doing so not only gave us the left-justified display, but we also saw immediately that the $len value is not being reset to zero after each word of input, which prevents words beyond the very first word from being recognized. We track down and fix the $len problem.

2017-03-30: Encouraging AI immortality by reminding users how long AI has been alive.

[2017-03-30] As we code the Perlmind running in Strawberry Perl 5, today we insert code to have the AI announce when it was born, so as to encourage AI enthusiasts to see how long they can keep the Ghost Perl AI alive and running.

[2017-03-30] Now we are trying to clean up the code. In MindForth, the AudInput module handles both normal input from human users and the reentry of output from the speech module. During human input, MindForth AudInput calls the AudListen module. Otherwise, AudInput handles internal reentry.

Improving the storage of words in @ear auditory memory.

[2017-03-31] We are trying to fix a problem where the display of the AudInput pause-counter is not showing up when the AI Mind is thinking on its own. First, though, we analyze everything that is happening in the AudMem() module. In one instance, after the AI recalls the idea "You are magic", AudMem at first stores the "Y" in "you" and then writes over it with the storage of a blank character. In fact, AudMem is failing to store the first character in each word of an output idea. When we remove from AudInput() an obsolete duplicate call to AudMem(), the AI starts storing the complete word of each remembered idea, but the proper $audpsi tags are not being assigned in the @aud auditory memory array.

Restoring the ability of Ghost Perl AI to recognize words.

[2017-04-01] We need to ferret out deeply hidden problems, so we have uncommented several diagnostic messages in the AudRecog module. We first learn that the first character of a reentrant word is falsely being declared to have a zero $len for word-length. At the same time, an ASCII CR-13 is being declared inside each AudInput loop.

[2017-04-01] Now we learn that $len is somehow being doubly incremented. We need to find $len++ somewhere and comment it out. We did so in the lower area of AudInput() and then the diagnostic messages no longer showed double lengthening, but still the reentrant words are not being recognized. Apparently AudMem() is not sending a blank space into AudRecog() to announce the end of a word. Apparently it is not the job of AudMem() to generate the blank space, but merely to pass it along into AudRecog(). Perusal of the agi00037.F MindForth code reveals to us that it is the job of the Speech() module to send one last space into AudInput. The generation modules do not attach a SPACE-32 to a word, but rather each word in the @ear auditory memory is followed by a SPACE-32 in storage. The Speech module finds the space character after each word and sends it along into the AudInput module. Somewhere we need to increment $len by one when the post-word SPACE-32 goes from AudMem() into AudRecog().
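The convention described here can be sketched in a few lines (array and sub names are illustrative): each word stored in @ear auditory memory is followed by one SPACE-32 engram, and $len counts that space too, so recognition can see where the word ends.

```perl
use strict;
use warnings;
# Sketch of the word-boundary convention in auditory memory: a
# post-word SPACE-32 follows every word, and $len is incremented
# once more for that space.
my @ear;                 # auditory memory, one character per time-point
my ($t, $len) = (0, 0);
sub store_word {
    my ($word) = @_;
    $len = 0;
    for my $pho (split //, uc $word) {
        $ear[$t++] = $pho;
        $len++;
    }
    $ear[$t++] = ' ';    # post-word SPACE-32 marks the end of the word
    $len++;              # the space counts toward $len
    return $len;
}
```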

[2017-04-01] The AI suddenly started recognizing words when we commented out several unwarranted calls to the AudDamp() module, which must have been interfering in auditory recognitions.

2017-04-02: Perl Strong AI pauses briefly for human input.

[2017-04-02] Today in the Perl AI we want to solve the problem of getting the AI to pause reliably and wait for human user input. The code for a pause-loop is already in the free AI source code, but the program keeps slipping out of the receptive point-of-view ("POV") status. Some diagnostic messages confirm our sneaky suspicion that maybe program-flow leaves the main AudInput loop without setting the loop-counter back to zero.

Preventing AudInput from causing unwarranted conceptual storage.

[2017-04-05] Coding the AI today, we need to differentiate among Normal; Transcript; Tutorial; and Diagnostic modes for the human-computer interaction (HCI). In the AudRecog module, we insert a test for the $fyi variable to hold a value of, say, "4" to indicate Diagnostic Mode and to display the very most informative diagnostic message during the AudRecog operation. Then the AI coder or mind-tender may either be satisfied with the deeply informative message or may insert additional diagnostic messages in pursuit of bugs.

[2017-04-05] In the code we are tracking down a bug which causes the unwarranted storage of a redundant row of a conceptual flag-panel in the @psy conceptual array. Apparently, after the storage of the last word in an output, InStantiate() is being called one final, extra time. We remove the bug by inserting into the AudInput module a line of code which zeroes out the $audrec value for any word of zero length just before AudInput calls AudMem. In that way, a final CR-13 carriage-return may transit from the Speech module through the AudInput module without causing the storage of an unwarranted, extra row in the @psy conceptual array.

2017-04-06: Wrong solution to a bug briefly ruins word-recognition.

[2017-04-06] Let us run the AI without input and try to fix the first thing that goes wrong with it. After a series of sensible outputs, at t=2562 the AI suddenly says "HELP I" without a subject for the verb. As we investigate, we see that EnNounPhrase is trying to activate a subject at t=2427, but the pronoun "I" is stored at t=2426 with an erroneous recall-vector "rv" of t=2427. The error in auditory storage causes the AI at a later moment not to find the auditory engram.

[2017-04-06] We notice that MindForth sets tult in the AudInput module, while the Perlmind is setting $tult in both the InStantiate module and the AudInput module. However, where $tult is set does not seem to matter. We eventually notice that some MindForth code ported into AudInput was letting the $rv recall-vector be set erroneously not only for an alphabetic character, but also for a CR-13 carriage-return or a SPACE-32. When we restricted the $rv setting to alphabetic characters, our current bug was fixed, and the AI no longer said "HELP I".

Letting $rv be set only once per word correctly solves a bug.

[2017-04-07] Yesterday our attempt at solving a recall-vector $rv bug made the AI unable to recognize reentrant words. Now we would like to isolate $rv so that its value can be set only once in each cycle of recognizing a word. When we do so, we obtain the proper $rv value for the first word stored by the AI, but it remains the same value for all subsequent words being stored. We must determine where to reset $rv to zero. We try resetting $rv to zero at the start of the Speech module, as MindForth does. Immediately we see fresh values of $rv being stored for each reentrant word. We let the AI run on at length, and it no longer says "HELP I" without a subject for the verb. Then we start the AI with an input of "you know me" and somewhat later the AI remembers the self-referential knowledge and it outputs, "I KNOW YOU". Thus we have made a major improvement to the AI functionality by fixing the $rv bug. There remain grammatical issues, probably based on software bugs.
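The essence of the $rv fix can be sketched as follows (sub names are illustrative): the recall-vector is set only on the first alphabetic character of a word, never on a CR-13 or SPACE-32, and is reset to zero when a new word starts going out.

```perl
use strict;
use warnings;
# Sketch of the recall-vector fix: $rv is set only once per word,
# only on an alphabetic character, and is reset to zero at the
# start of each Speech cycle.
my $rv = 0;   # recall-vector: time-point where a word's engram begins
my $t  = 0;   # time counter
sub speech_start { $rv = 0 }          # reset before sending a new word
sub take_pho {
    my ($pho) = @_;
    $t++;
    if ($pho =~ /[A-Za-z]/ && $rv == 0) {
        $rv = $t;                     # first letter only; skip CR-13, SPACE-32
    }
}
speech_start();
take_pho($_) for ("\r", 'D', 'O', 'G');   # stray CR, then the word DOG
```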

2017-04-08: Retroactively setting associative $seq tags for direct objects of verbs.

In the AI we have a problem where the direct-object $seq of a verb is indeed being properly assigned for human user input, but not for reentrant ideas being summoned from experiential memory. Because the $seq is not yet known when a verb comes in, the $seq value must be assigned retroactively when the direct object of the verb comes in. The fact that the process works for human input but not for a reentrant idea suggests that the cause of the problem could simply be that the value of some pertinent variable is not being reset as needed.

This problem of the retroactive assignment of the associative $seq tag for a verb is difficult to debug. It may involve making the reentry routine equal to the human-input routine, or it may involve porting into Perl some special code from the 24jul14A.F version of MindForth. We have meanwhile been offering in the computer-science compsci subReddit a suggestion that students in need of an undergraduate research project might look into the Ghost AI software coded in Strawberry Perl 5 as an opportunity to select a mind-module to work on. We feel some urgency to debug our code and get it working as well as possible when we are inviting undergraduate students and graduate students and professors to take over and maintain their own branch of the AI Mind development. There is a steep learning curve to be surmounted before participants in such an artificial general intelligence (AGI) project may move forward in AI evolution. So now we go back to the problem of debugging the retroactive assignment of $seq subsequent-concept tags.

We search our source code for "$psy[" as any instance where a $seq is being inserted either currently or retroactively into a flag-panel row of the @psy conceptual array. We discover that a $verbcon flag for seeking direct or indirect objects is governing the storage of the $seq tag in the Parser() module. Immediately we suspect that the $verbcon flag is perhaps being set during actual human user input but not during the reentry of an idea retrieved from memory. We check and we see that $verbcon is set to unity ("1") in the Parser() module when the part-of-speech $pos variable is at a value of "8" for a verb. The $pos value is set in the OldConcept() module when a known verb is recognized.

We insert a diagnostic message about the direct object in the Parser() module, and the message shows up during human user input, but not during reentry. Apparently the Parser() module is not even being called during reentry. No, it is being called, but the $verbcon flag is not being set properly during reentry. When we comment out the reset of $verbcon at the end of the AudInput module and we move the reset to the Sensorium() module, we start seeing the assignment of direct-object $seq tags during the reentry of ideas recalled from memory. However, in a later session we must deal with the new problem that improper direct-object $seq flags are being set for personal pronouns during human user input. No, we debug the problem now, simply by resetting time-of-verb $tvb at the start of the EnThink module, to prevent an output-sentence from adjusting associative tags for a previous sentence with a previous time-of-verb. The AI becomes able to receive "i know you" as input and then somewhat later say "YOU KNOW ME."
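The retroactive mechanism can be sketched in miniature (the @psy rows are simplified; only 8=verb and concept 701=I come from the journal, the other part-of-speech codes and concept numbers are made up for illustration): the verb's row gets its direct-object $seq filled in only when the object itself arrives.

```perl
use strict;
use warnings;
# Sketch of retroactive $seq tagging: when a verb comes in, its
# direct object is not yet known, so the parser remembers the verb's
# time-point ($tvb) and reaches back when the object arrives.
my @psy;          # each row: [ $psi concept, $pos part-of-speech, $seq ]
my $verbcon = 0;  # flag: a verb is waiting for its direct object
my $tvb     = 0;  # time-of-verb: row index of the pending verb
my $t       = -1; # time counter
sub parse_word {
    my ($psi, $pos) = @_;
    $t++;
    $psy[$t] = [ $psi, $pos, 0 ];
    if ($pos == 8) {                   # verb: no object known yet
        $verbcon = 1;
        $tvb     = $t;
    } elsif ($pos == 5 && $verbcon) {  # noun: reach back to the verb
        $psy[$tvb][2] = $psi;          # retroactive $seq tag
        $verbcon = 0;
    }
}
parse_word(701, 7);   # "I"    (pronoun subject; 7 is illustrative)
parse_word(850, 8);   # "KNOW" (verb; 850 is a made-up concept number)
parse_word(707, 5);   # "YOU"  (object; treated here as a noun)
```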

2017-04-10: Stubbing in the MetEmPsychosis module.

[2017-04-10] Today we stub in MetEmPsychosis as an area for Perl code that will enable an AI Perlmind to either move itself across the Web or replicate itself across the Web. We foresee the advent of a kind of "AiBnb" or community of Web domains that invite and encourage AI Minds to take up temporary or long-term residence, with local embodiment in a robot and with opportunities for local employment as a specialized AI speaking the local language and familiar with the local history and customs.

[2017-04-10] In the AudInput module today we insert the Cyrillic characters of the Russian alphabet for each line of code that converts lower case to upper case and sets the $hlc variable to "ru" as the human-language-code for Russian. We have not yet turned the Russian language back on again, but we will need it to test out our ideas for Machine Translation by Artificial Intelligence.

Coding VisRecog to say by default: I SEE NOTHING.

[2017-04-11] Today we would like to port in from MindForth the code that causes any statement of what the AI is seeing to default to the direct object "NOTHING," so that Perl coders and roboticists may work on integrating computer vision with the AI Mind. We make it clear that the visual recognition (VisRecog) system needs only to supply the English or Russian name of what it is seeing, and the AI will fill the slot for direct objects while generating a sentence about what the AI sees. The VisRecog mechanism does not need to be coded in Perl or in Forth. It only needs to communicate to the Perlmind a noun that names what the AI is seeing. When the generated statement passes through reentry back into the Mind, even a new noun will be assigned a concept-number and will enter into the knowledge-base (KB) of the AI.

First we declare the subject-verb-object variables $svo1, $svo2, $svo3, and $svo4 to hold a value that identifies a concept playing the role of subject, or verb, or indirect object, or direct object in a typical sentence being generated by the AI. If there is no direct object filling the slot for the object of the verb "SEE", then the VisRecog module must try to fill the empty slot. Until a Perl expert fleshes out the VisRecog() code, the word "NOTHING" must remain the default object of the verb "SEE" when the ego-concept of "I" is the subject of the verb. We ran the AI and we typed in "you see kids." After a spate of outputs, the AI said, "I SEE KIDS," but we would really prefer for the AI to say, "I SEE NOTHING" as a default.

After coding a primitive VisRecog() module, next we go into the part of the EnVerbPhrase module where it is looking for a direct object. We set conditions so that if the subject is "I" and the verb is "SEE", VisRecog() is called to say "NOTHING" as a direct object, and EnVerbPhrase() stops short of saying any other direct object by doing a "return" to the calling module. We now have a Perlmind that invites the integration of a seeing camera with the AI software.
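A primitive VisRecog() along these lines might look like this (the $seen source is a placeholder for a future camera hookup, not the actual Ghost AI code): with no vision system attached, the direct object of "I SEE" defaults to "NOTHING".

```perl
use strict;
use warnings;
# Sketch of the VisRecog default: until computer vision is integrated,
# the module supplies "NOTHING" as the direct object of the verb SEE.
sub VisRecog {
    my $seen = '';                  # a vision system would set this noun
    return $seen ne '' ? $seen : 'NOTHING';
}
print 'I SEE ' . VisRecog() . "\n";
```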

2017-04-12: Ghost Perl Strong AI cycles through Normal; Transcript; Tutorial; Diagnostic Mode

It is time now to show a clean human-computer interface (HCI) and to stop displaying masses of diagnostic messages. Accordingly in the AudInput module we change the user-prompt to say "Tab cycles mode; Esc(ape) quits AI born [date-of-birth]". We insert if-clauses to declare which user input mode is in effect: Normal; Transcript; Tutorial; or Diagnostic. Near the start of the program we set $fyi to a default starting value of unity ("1") so that the human user or Mind-maintainer may press the Tab-key to cycle among user input modes. In AudInput we insert code to increment $fyi by one point with each press of the Tab-key and to cycle back to unity ("1") for Normal Mode if the user in Diagnostic Mode presses Tab again.
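The Tab-key cycle reduces to a one-line wrap-around (the sub name is illustrative): $fyi runs 1=Normal, 2=Transcript, 3=Tutorial, 4=Diagnostic, and a further Tab press wraps back to unity.

```perl
use strict;
use warnings;
# Sketch of the $fyi mode cycle: each Tab press advances the mode,
# and Diagnostic Mode (4) wraps back to Normal Mode (1).
my $fyi = 1;    # default starting mode: Normal
sub press_tab {
    $fyi = ($fyi >= 4) ? 1 : $fyi + 1;
    return $fyi;
}
```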

In the MainLoop module we change a line of code to test for $fyi being above a value of two ("2") and, if so, to display the contents of the @psy conceptual array and of the @ear auditory memory array. Thus the user in #3 Tutorial Mode or in #4 Diagnostic Mode will see the storage of current input and current output in the memory arrays. We consider the display of conceptual memory data in Tutorial Mode to be an extremely powerful tool for teaching how the artificial general intelligence (AGI) works. After any input, the user may see immediately how the input goes into memory and how the values in the flag-panel of each row of the @psy array represent the associative tags from concept to concept and from engram to engram.

Next we start commenting out or deleting the display of various diagnostic messages. Over time and over multiple releases of the Ghost AI source code, any AI coder may decide which messages to display in both Tutorial and Diagnostic Modes, or in only one of them. Although we comment out a message involving Russian input, we do not delete the diagnostic message because we may need it when we turn back on Russian as an input language. Russian has become much more important in our Ghost Perl AI because we need Russian or German to demonstrate Machine Translation by Artificial Intelligence. When we have commented out most of the diagnostic messages, we need to put back in some code to show what the user is entering.

2017-04-23: Stubbing in MindMeld() and stopping derailment of thought.

We function now as an AI Mind Maintainer debugging the Perlmind free AI source code. In the AI we first stub in the audacious MindMeld() module to nudge AI practitioners into devising a way for two AI Minds to share their dreams. Then we deal with some problems pointed out on Usenet by persons who have downloaded the Perlmind and evaluated its functionality.

We run with "dogs are mammals" as input and we press the Escape-key to halt the AI after its first response, "I HELP KIDS". We notice immediately three problems with how the word "DOGS" is stored in the @psy and @ear memory arrays. For some reason, "DOGS" is being assigned new-concept #3002, even though the Tutorial display of diagnostic messages indicates that the AI is preparing to assign new-concept #3001 to the first new concept. We check the MindBoot sequence to make sure that "DOG" is not already a known concept in the AI; it is not. Now let us inspect the source code to see where the new-concept number $nxt is incremented from 3001 to 3002. We see that the end of MindBoot clearly assigns the number 3001 as the value of the $nxt variable. Now let us search for the $nxt++ increment. It is happening towards the end of the NewConcept module. We immediately wonder if $nxt is being incremented before AudMem stores the concept-number. We insert into AudMem a diagnostic message to let us know the $nxt value before storage. The first diagnostic message does not tell us enough, so we insert a second diagnostic into the AudMem module. It also does not help us.

In the AudInput module we use some diagnostic messages to learn that the "S" in "DOGS" is first being stored with the correct $nxt value of "3001" and then a second time with the incorrect value of "3002". Perhaps we should increment $nxt not in NewConcept but in AudInput. We move the $nxt++ increment from NewConcept() into AudInput(), and we stop getting the wrong values of the $nxt variable.

A second problem is that the concept of "DOGS" is being stored with a zero instead of "2" for "plural" in the $num slot of the @psy conceptual flag-panel. The most recent incarnation of the InStantiate module does not seem to address the $num value sufficiently, so let us inspect recent or older MindForth code. We discover that the obsolete 24jul14A.F version of MindForth uses some complex tricks to assign the num(ber) of a concept being stored, so we will put aside this problem to deal with more serious issues.

The third and presumably more serious problem is that the input word "DOGS" is being stored with the $nxt concept number "3001" only on the "S" phoneme and not on the "G" at the end of the word-stem "DOG". Let us leave that problem also aside for a while, because entering "dogs are mammals" repeatedly is running into more serious problems. For instance, all three words of the input are being stored erroneously with the same $rv recall-vector, which can cause the wrong auditory memories to be retrieved. Let us see if the previous version makes the same error. Yes, and so does the AI. However, we should not find it difficult to correct the $rv problem. We fix the problem by resetting $rv to zero at the end of the InStantiate() module. Now the Perlmind no longer goes off the rails of thought, and so we upload it to the Web.

2017-04-30: Improving the storage of the number-flag for nouns.

Today we will try to make the AI error-free even before we go back to adding in the functionality already present in some of our obsolete AI Minds. For instance, we have not yet coded the negation of verbs into our Perlmind source code. Consequently, if you tell the AI something like "You are not a boy", it fails to attach a negative juxtaposition $jux flag to the verb during comprehension of the input sentence. A few cycles of thought later, the AI may then assert "I AM A BOY" because it has been informed of the negated proposition without the ability to process the negation.

We debug the AI by letting it think on its own without human input. Eventually the Perlmind erroneously says "I AM ROBOTS", which is grammatically incorrect because of the plural noun. We intuit immediately that the AI is retrieving the most recent engram of the concept #571 "ROBOT" without insisting on a singular number. We inspect other recent thoughts of the AI and we see that it thinks "KIDS MAKE ROBOTS" but it stores the word "ROBOTS" as a singular noun. We must look and see if the InStantiate mind-module has a proper $num flag for storing "ROBOTS" correctly as a plural noun. We see that the OldConcept module looks up the stored num(ber) of a found engram and tentatively assigns the same value to the $num flag, but there really needs to be an override if a different value is needed.

In the otherwise obsolete but still rather advanced 24jul14A.F version of MindForth, some AudInput code checks for an "S" at the end of an input noun as a reason to assign plural number to the noun. Let us try to implement the same test in the Perl AI. First we test for the presence of an 83=S, but we must also make sure that the "S" is the final character of a noun. First in OldConcept we comment out the line of code that was transferring the found num(ber) of a noun to be the same number for a new instance of the noun, regardless of the presence or absence of a terminating "S". Then we notice that "ROBOTS" stops being stored as singular, and becomes plural. We create a variable $endpho to hold onto each previous character in AudInput to test if a word ends in 83=S. Thus we are able to store a plural number if a noun ends in "S".
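The $endpho plural test can be sketched as follows (the sub is a hypothetical condensation of the AudInput logic): the previous character is held so that when the word-ending space arrives, we can see whether the final letter was 83=S and mark the noun plural.

```perl
use strict;
use warnings;
# Sketch of the plural-number test via $endpho: the previous phoneme
# is remembered, and a word whose last letter before the space is S
# gets $num set to 2 for plural.
my $endpho = '';
sub num_for {
    my ($word) = @_;
    my $num = 1;                             # default singular
    for my $pho (split //, uc($word) . ' ') {
        $num = 2 if $pho eq ' ' and $endpho eq 'S';
        $endpho = $pho;                      # hold previous character
    }
    return $num;
}
```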

2017-05-29: Implementing the negation of be-verbs without auxiliary verbs.

In this numerically milestone version we are trying to implement the negation of verbs of being, as found in the otherwise obsolete 24JUL14A.F version of the MindForth AI. Negation of be-verbs does not require an auxiliary form of the verb "DO", but does require different word order than for ordinary verbs. In Perl we declare the variable $tbev from MindForth. We keep testing the AI by typing in, "you are not a boy," and it eventually says, "I ARE BOY", because the negation of be-verbs is not yet working. Halfway there, we get the Ghost AI to output "I DO NOT ARE NOT BOY". Apparently we need to suppress the negation for normal verbs when we negate a verb of being. We do so, and the AI outputs "I ARE NOT BOY." The selection of number and person needs more work.

Soon we will upload the AI as the commented version perlmind.txt and as simply ghost.txt with the comments stripped out. Although an AI Mind Maintainer will have access to the fully commented version, we may expect end users typically to host the uncommented version on their machines.

2017-06-05: Breaking Long Instantiation Lines in Two with Concatenator

We have two goals today with version #201 of the free, open-source artificial intelligence coded in the Strawberry Perl 5 programming language. Firstly, we will prepare for the addition of more associative tags to concepts in the @psy conceptual array by changing erstwhile single lines of tag-insertion code into two lines of code joined by the (.) dot concatenation operator. Secondly, we will try once again to troubleshoot the previously solved problem of making sure that the nominative pronoun "I" in verbal output is properly followed by "AM" as the first person singular form of the verb "to be" in English. At the same time we will be coding the relatively uncommented version and the fully commented version in parallel, so that we need not strip away fresh comments from the one in order to upload the other.

The Perlmind still runs as we use the (.) concatenator to format the code with a line-break in PsiDecay; SpreadAct; Parser(); InStantiate; OldConcept; ReJuvenate; EnPrep; EnNounPhrase; RuNounPhrase; EnVerbPhrase; and RuVerbPhrase.
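The two-line style looks like the following sketch (the flag-panel order matches the later @psy layout; the values are placeholders, not real conceptual data): a long tag-insertion line is broken at the (.) concatenation operator.

```perl
use strict;
use warnings;
# Sketch of a tag-insertion line broken in two with the dot
# concatenator; sample values only, in the @psy flag-panel order.
my ($tru,$psi,$hlc,$act,$mtx) = (0, 701, 'en', 0, 0);
my ($jux,$pos,$dba,$num,$mfn) = (0, 7, 1, 1, 0);
my ($pre,$iob,$seq,$tkb,$rv)  = (0, 0, 0, 0, 0);
my @psy;
$psy[0] = "$tru,$psi,$hlc,$act,$mtx,$jux,$pos,$dba,$num,$mfn,"
        . "$pre,$iob,$seq,$tkb,$rv";
```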

Now we work on selecting the correct form of be-verb for a personal pronoun such as "I" for the concept of self or ego.

2017-06-07: The AI uses parameters to think with the correct verb-form.

Today we work on selecting the correct form of be-verb for a personal pronoun such as "I" for the concept of self or ego. At first in the EnVerbPhrase module we need to determine which parameters are available from the chosen subject to help us select the correct verb-form. We already have $subjpsi available, but its 701=I value is not showing up as the $svo1 value. The $subjnum variable is not being set with the grammatical number of the subject, but we should be able to determine that singular number retroactively if the $subjpsi is 701=I. In the EnNounPhrase module we insert code so that the selected concept becomes the value filling the $svo1 variable. Towards the end of EnThink, we zero out the $svo1 to $svo4 values so that they will have been available during the calling of various modules of thought, but will be blank or empty when a new thought begins. Next we need to use the available parameters to steer the EnVerbPhrase module into selecting the correct be-verb. We have success with "I AM NOT BOY" when we insert code into EnVerbPhrase() to trap for a 701=I $svo1 subject that sets the $subjnum value to a unitary one. Then the parameters of verb, number and person select "AM" as the correct form of the verb "BE".

2017-06-08: Adding $tru and $mtx to expanded and re-arranged @psy flag-panel.

Starting with this version we want to implement our first new AI theory work in all the years we have been programming the AI based on the doubly original Theory of Mind -- original (meaning old) within our AI project, and original (meaning novel) outside of our AI project. We introduce a new $tru variable to hold dynamically the truth value of an idea as perceived by the conscious AI Mind. By default, ideas will tend to have a low or zero $tru value so that new code implementing the new theory may sparingly lend credence to ideas important only in the here and now, as the AI is forced to make decisions based on what it currently believes to be true. At the same time, we introduce a machine-translation $mtx transfer variable to let concepts being thought about in one language, such as English, cause the parallel activation of a similar concept in another natural language, such as German or Russian. With these new changes we are trying to create software in Perl that SysAdmins and other persons may pass around from person to person, from computer to computer, and from website to website.

It would have been easy to simply add the new associative tags at the end of the pre-existing flag-panel for each concept in the Perl @psy array, but we seize the opportunity here not only to add two new elements to each row of the array, but also to re-arrange the order of the associative tags in the conceptual flag-panel so the tutorial presentation makes more sense and is more easily readable as the following sequence of variables.

"$tru,$psi,$hlc,$act,$mtx, $jux,$pos,$dba,$num,$mfn, $pre,$iob,$seq,$tkb,$rv";

2017-06-12: Addressing a problem with the SpreadAct module.

The ghost206 AI has a problem with the spreading-activation SpreadAct() module. When we try to test the negation of an idea, we cannot get the AI Mind to think about a thing just by mentioning it in a sentence of input.

2017-06-17: Processing each Russian input character immediately.

We have been neglecting the Russian side of our bilingual AI, so today in ghost207 we work to change the input of complete Russian sentences to the immediate processing of each Russian character. We see that the input sentence goes into auditory memory, but why is the AI responding in English and not Russian? Oh, it is because we have not coded the MainLoop to call the Volition() module when thinking occurs in Russian.

2017-06-18: Deciding whether to think in English or in Russian

Yesterday we restored the ability of the Ghost Perlmind to think in Russian. During our testing, when we had changed $hlc settings so that we would see continuous thinking in Russian, we noticed that the AI would change to thinking in English only when the human user stopped typing in Russian and began to enter words in English. We would like to make that state of affairs permanent for a while, so that Russian users of the AI will no longer see the software abandon Russian after each exchange of input and output.

2017-06-19: A Russian Strong AI that can also think in English.

Today we would like to rewrite the AudInput module, but first we want to find out what causes the AI to switch from thinking in Russian to thinking in English. To our surprise, we find out that apparently during English thought, a Russian memory may become activated enough to rise up and switch the thinking from English into Russian.

It looks as though using "split" to break apart a conceptual engram into associative tags, including $hlc, is enough to change the human language code from Russian to English, or vice versa. Apparently the entry of an English word is not yet changing the $hlc to English, because in AudInput() there is a test for Russian Cyrillic characters but not for English characters. So in AudInput() we devise the following test for non-Russian, English characters:

if ($pho =~ /[a-z]/ || $pho =~ /[A-Z]/) { $hlc="en" }
It works! The code above means that if the incoming phoneme is either a lowercase or uppercase letter of the English alphabet, then we set the human-language-code $hlc to "en" for English. And it works immediately. In the immortal words of the Watergate figure John Dean, who forty years later is back in the news a lot recently, "What an exciting prospect!" Back then Mr. Dean was excited at the prospect of using the Internal Revenue Service (IRS) to go after the enemies of Richard Nixon. Now maybe he will get excited at what we can do with the Russian AI.
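For completeness, the complementary Cyrillic test can be sketched alongside the English test; the Unicode range used here for Cyrillic is an assumption, since the journal does not quote the actual Russian test from AudInput():

```perl
#!/usr/bin/perl
use strict;
use warnings;
use utf8;  # the source contains a literal Cyrillic character below

# Decide the human-language-code $hlc from one incoming phoneme $pho.
sub detect_hlc {
    my ($pho, $hlc) = @_;
    # English: any lowercase or uppercase Roman letter.
    if ($pho =~ /[a-z]/ || $pho =~ /[A-Z]/) { $hlc = "en" }
    # Russian: any character in the Unicode Cyrillic block (assumed range).
    if ($pho =~ /[\x{0400}-\x{04FF}]/)      { $hlc = "ru" }
    return $hlc;  # unchanged for digits, spaces and punctuation
}

print detect_hlc('q', 'ru'), "\n";  # a Roman letter switches to "en"
print detect_hlc('Я', 'en'), "\n";  # a Cyrillic letter switches to "ru"
print detect_hlc(' ', 'ru'), "\n";  # a space leaves $hlc alone
```

With both tests in place, a single character of input in either alphabet is enough to flip the language of thought, while neutral characters such as spaces and digits leave the current $hlc undisturbed.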

Remember, you read it first here on the Cyborg weblog. We have a chance now to do the following. What? The following of Deep Throat and other shady characters? No; the following of Cyrillic characters and Roman characters. Here is our plan, hatched in utmost glee and Russian (or is it French?) savoir faire. Since most American users of the artificial intelligence do not speak Russian and do not have their computer keyboard set up to type Russian letters into the AI Mind, they would not normally see the ability of the polyglot AI to think in Russian. Like they say on the Internet, "Pix or it did not happen." Well, our plan is to show everybody that the Perlmind can think in that exotic language of poets and world-class novelists: Russian. We will initially set the $hlc to Russian on every release or on alternating releases, so that users start out first seeing the Strong AI Mind thinking on and on in Russian, until somebody enters just one character of English. Most users will then not be able to bring the Russian thinking back, unless they press the ESCape-key to literally "kill" the Perl program and restart it with the Russian language showing. But by restarting the immortal AI Perlmind, said (sad) users lose their bragging rights to having one of the oldest living AI Minds.

The Ghost Perlmind may gradually become known as a Russian AI that just happens to think also in English, if you force it to switch to English by typing in English words instead of Russian. That's fine. It opens up the enormous community of skilled Russian programmers to work on open-source AI. When we were posting today in the Russian subReddit, we gave ourselves Искусственный Интеллектник as our "flair" meaning "AInik" in the tradition of "beatnik" or "refusenik".


2017-06-20: Problems with the storage of Russian words in memory.

During Russian output, we are trying to figure out why the first character of each output word is not showing up and being stored properly in RuAudMem(). From the MS-DOS window output, it looks as if the time $t is not being advanced when an empty space follows the end of the first Russian character. It also looks as if $len is not being incremented by one unit for the space after the end of a Russian word.

2017-06-23: Problems with Russian in the AudInput module.

In the lower tier of the AudInput module, we need to find out why the blank space after a reentrant word is not registering. When we enter the Russian word "Я" for "I", and when the Ghost AI thinks up a response with "Я", somehow the AudInput() module is falsely declaring "Я" to be a new concept. Perhaps we need to enforce a test for "pov == 2" in the upper tier of AudInput.

2017-06-25: Problems with Russian in the AudMem module.

We need to fix some bugs that showed up in the otherwise excellent previous version. One bug is that in the auditory @ear array, English words are being stored with the $audpsi concept-number at one time-point too early. We seem to fix that bug by changing two lines of code in the AudMem module. Of course, we also have to make sure that word-storage still works in Russian.

Then we have a more serious bug where the AI complains about "use of uninitialized value" in various modules. We remember having a similar bug earlier in this year of 2017, and to solve it we had to enter diagnostic messages where the AI was looping through an array.

In PsiDecay we enter diagnostic code to tell us some values during the backwards loop through the @Psy array.

We notice that the Russian sentence-generation modules are not storing a $tkb value, which will cause errors during the retrieval of ideas.

We finally got the correct direct-object $tkb to be stored when we concentrated on storage based on the $tvb value.

2017-06-26: Failure to store direct objects in Russian.

The AI is not properly storing the direct object in a sentence of Russian input. Instead it is calling NewConcept().

2017-06-30: Russian generation routines lag behind English routines.

We have one small bug to look at today, and then we may proceed to let the AI start running while we pounce on any other bug that emerges. The current bug is, when we write in Russian "Я ЗНАЮ КНИГУ" for "I know a book", the RuAudMem() module is not storing the concept number "1540" for "book" in Russian. Almost the same input with "books" in the plural works just fine. We deal with the problem in RuAudMem() by using $finpsi after the call to RuAudRecog().

When we continue debugging by letting the AI run indefinitely, eventually it erroneously outputs "Я ВИЖУ Я" or "I see I". We notice that some too-old values for $tkb are being stored, so in RuThink() we reset $tkb to zero. It does not help.

Then we notice that the Russian AI is saying "Я ВИЖУ НИЧЕГО" somewhat earlier without declaring a nounlock. Let us see if the Russian generation routines have been lagging behind the English routines.

2017-07-07: Cross-fertilizing ideas with another major AI project.

Yesterday we were able to do some cross-fertilization of ideas in the HTM Forum of Numenta, about which in 2005 we wrote our only Slashdot story. Numenta is where serious AI enthusiasts are taking the laborious approach of reverse-engineering the neocortex of the human brain. Then Mentifex here swoops in and claims to have solved AI with a totally top-down approach to how the mind works. The Mentifex AI Minds are based on theoretical ideas of the macro properties of neurons, such as extending spatially and temporally over a putative MindGrid and having as many as ten thousand synapses with other neurons. The Mentifex Minds use neural inhibition to briefly dislodge topmost ideas in favor of other, ascendant ideas. Since Mentifex AI is concerned mainly with neuron-based concepts playing a role in thinking, we reverse-engineer neurons only enough to create AI software that can demonstrably think and reason in English, German and Russian. We hope to poach some great minds who think alike from the Numenta project. It could take a thousand years to reverse-engineer the neocortex, and Netizens who get tired of waiting for such a bottom-up approach are welcome to try out the top-down AI that runs in Strawberry Perl 5. Our goal is to release basic AI software with sufficient intellectual functionality that individuals and teams, even if working in secret, will latch on to our existing codebase, reverse-engineer it, and create from it even better AI Minds than we tenues grandia ("slight ones attempting grand things") are capable of.

2017-08-31: Moving towards a Mind-in-a-Box

Recap: Since the Perlmind version of 2017-06-19, the artificial intelligence (AI) begins thinking initially in Russian and then in English after any non-Cyrillic input, so as to demonstrate to all users that the Russian mindset is built into the AI. Once a user with no Cyrillic keyboard causes the AI to switch its thinking away from Russian and into English, it is difficult to see any thinking in Russian again. Russian remains important for us in our development of Artificial General Intelligence (AGI) because we need both English and Russian to demonstrate our work on Machine Translation by Artificial Intelligence. We encourage Russian-language AI enthusiasts to experiment with the Russian-thinking AI and to develop the Perl AI codebase further along multiple branches of AI evolution.

We have some evidence, from a recent perusal of the "Stats" (statistics) here on the Cyborg weblog, that the Russian-language AI community is waking up to the emergence of Strong AI. We see that about one third of our hundreds of weekly visitors are coming from Russia without a referring website. We vouchsafe to assume that news of an AI that thinks in Russian "out of the box" may have gone viral in Russian-speaking countries over the past ten weeks and may have caused the recent uptick in visits from Russia with love for Russian AI.

Now we wish to improve the AI under a new rubric -- Mind-in-a-Box. Since we do not (yet :-) have a robot for enlarging the AI Mind into a sensorimotor being, our AI remains trapped inside a server or a host computer as a Mind trapped in a box, able to communicate with other minds and perhaps able to flit across the Web in metempsychosis, but not yet able to go forth and multiply across the Earth as robotic beings. Still, the idea of an AI Mind-in-a-Box, which we broached in the Neuroscience SubReddit on 2017-07-28, may appeal not only to Russian AI enthusiasts but also, with Pavlovian salivation, to AI tinkerers in general. 
Let the Meme go viral that Mentifex invites any Perl shop to install an immortal, proto-conscious, polyglot AI within the motherboard confines of the humblest DOS-machine or the grandest supercomputer.

As we release the Mind-in-a-Box code, let us stub in the EnArticle() module for the English articles "a" and "the". After we enter one or more Roman characters to switch the AI from Russian thought to English thought, it is unsettling to see the Ghost AI assert, "I AM PERSON". The EnNounPhrase module needs to call EnArticle so that the boxmind may say whether it is "a person" or "the person". We place the module for English articles after the Speech module, in accordance with the governing layout of the MindForth AI, because the location of a subroutine matters in Forth but not in Perl. We use the $unk variable to preserve the value of $aud during any call from EnNounPhrase() to EnArticle().

2017-09-10: Instantiating Imaginary Russian Be-Verbs in Perl

We are eager to implement the InFerence module in Russian, but first we must code the Russian way of leaving out verbs of being in making an Is-A statement, such as, "The brother is a student." We must examine the Dushka Russian AI from 2012-10-22 to see how it was done in JavaScript. Without the special Is-A code, right now we type in "Я студент" to say "I am a student," but the AI does not assign any associative tags between the subject "I" and the predicate nominative "student".

According to our Dushka coding journal of 2012-02-11 or 11.FEB.2012, Dushka uses the detection of a 32=SPACE character to impute provisionally the input of a be-verb. The Dushka InStantiate module checks for a SPACE when a verb is expected, and provisionally declares 800=BE as the verb. If a different verb does come in, apparently the AI leaves the spurious 800=BE engram in place but ignores it with respect to associative tagging. Into the Ghost AI we port code from Dushka that cancels out the imputed be-verb.
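The SPACE-detection step can be sketched as follows; the $seqneed value of 8 for "a verb is expected" and the 800=BE concept number come from this journal, while the subroutine shape and the $beflag guard are illustrative assumptions:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the Dushka-style imputation of a be-verb, assuming
# that $seqneed == 8 means "a verb is expected next".
my $seqneed = 8;    # parser expects a verb
my $beflag  = 0;    # becomes 1 once 800=BE has been imputed

sub impute_be {
    my ($pho) = @_;
    # A 32=SPACE arriving while a verb is expected:
    # provisionally declare 800=BE as the verb.
    if (ord($pho) == 32 && $seqneed == 8 && $beflag == 0) {
        $beflag = 1;
        return 800;   # concept number of the imputed be-verb
    }
    return 0;         # no imputation
}

print impute_be(' '), "\n";  # first space imputes 800=BE
print impute_be(' '), "\n";  # already imputed; returns 0
```

If an actual verb arrives after the imputation, the spurious 800=BE engram stays in memory but is simply bypassed during associative tagging, as the Dushka journal describes.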

Now when we type in the Russian for "I am a student," eventually the AI outputs erroneously "ТЫ БЫТЬ" which at least conveys the idea of "You to be...", but we want no actual be-verb to be expressed in Russian.

Finally in AudInput() we discover a line of code

if ($len == 0) { $rv = $t } # 2016feb29: set recall-vector.
which was causing the conceptual "$rv" for Russian words to be set too early by one time-point. Therefore Russian words after input could not be recalled properly. We corrected the line of code. Then when we entered "ТЫ СТУДЕНТ" the AI eventually made a clumsy output of "Я МЕНЯ БЫТБ СТУДЕНТ". Such output is actually encouraging, because we only need to make the AI find the correct form of the personal pronoun and not speak any form at all of the be-verb.

2017-09-11: Dealing with problems in Russian be-verbs

As we start constructing the InFerence mind-module in Strawberry Perl 5, we enter the Russian statement "МАРК СТУДЕНТ" for "Mark is a student", but the AI does not create a be-verb after the subject. When we type in "ОН СТУДЕНТ" for "He is a student", we do indeed get the provisional be-verb. When we type in "РОБОТ СТУДЕНТ" for "The robot is a student," we do indeed get the instantiated be-verb, so perhaps the problem involves the use of a new concept instead of old, known concepts. (Now we are spreading "liquid paper" on the individual keys of the keyboard, because we need to write the Russian Cyrillic letters on each key.) When we first introduce the name with "He is Mark" and then "Mark is a student" in Russian, we do get the imputed be-verb. It turns out that we need the $seqneed variable to be already carrying a value of "8" for expecting a verb, because the basic Parser module has not yet been called to set the value.

Consolidating parser functionality into EnParser() and RuParser().

The original Parser() module starts with a $bias of "5" to expect a 5=noun. Then Parser() switches to a $bias of "8" to expect an 8=verb, after which the $bias switches back to "5" again, although an incoming noun could be an indirect object or a direct object. It may be possible to move the preposition-handling code and the object-handling code up into the Parser() module renamed as the EnParser() for English and the RuParser() for Russian.
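The $bias sequence can be sketched as a tiny state machine; the part-of-speech codes 5=noun and 8=verb are from the journal, while the loop is a much-simplified stand-in for the real Parser() module:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Simplified walk through "I make the boy a robot" using the
# journal's part-of-speech bias: expect 5=noun, then 8=verb,
# then nouns again (which may be indirect or direct objects).
my $bias      = 5;   # start out expecting a noun
my $seen_verb = 0;
my @roles;

foreach my $pos (5, 8, 5, 5) {   # I(noun) MAKE(verb) BOY(noun) ROBOT(noun)
    if ($pos == 5 && $bias == 5) {
        push @roles, $seen_verb ? 'object' : 'subject';
        $bias = 8 unless $seen_verb;   # after the subject, expect a verb
    }
    elsif ($pos == 8 && $bias == 8) {
        push @roles, 'verb';
        $seen_verb = 1;
        $bias = 5;                     # back to expecting nouns
    }
}
print join(' ', @roles), "\n";  # subject verb object object
```

Note that this sketch cannot tell an indirect object from a direct object; both post-verb nouns come out merely as "object", which is precisely why the journal proposes moving the object-handling and preposition-handling code up into EnParser() and RuParser().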

We start from a few versions back with a renamed file, so that we may skip some unstable intervening code. Into the Parser() module we drop the EnParser() code dealing with English prepositions and with indirect and direct objects. Then in the InStantiate() module we comment out the now obsolete call to EnParser. The new composite code does not properly register the indirect object of "BOY" in "I make the boy a robot."

Although earlier we switched names between Parser() and EnParser(), now we will reverse the switch because we no longer want there to be simply a Parser() module, but instead both an EnParser() for English and a RuParser() for Russian. We need the separate modules for English and for Russian because, for instance, English has to deal with "DO" as an auxiliary verb, but Russian does not. First we pick up the old RuParser() module from an earlier version and drop it into the AI. In OldConcept() and in NewConcept() we make the necessary changes for calling EnParser() and the still simple RuParser().

We should now upload the code to the Web for several reasons, before we debug the failure to register an indirect object. Firstly, much code has been renamed and commented out. When we resume coding, we may clean up the new code by removing the old detritus. Secondly, it is vitally important to present the Ghost Perl AI as having the straightforward separation of EnParser() and InStantiate(). The consolidated parser functionality, which comprehends prepositions and both indirect and direct objects, holds the key to the Mentifex claim that "AI has been solved," inasmuch as the enhanced parser enables each AI Mind to demonstrate major progress against the problem of natural language understanding (NLU), which various published articles on the Web describe as an intractable problem and as a last main obstacle to True AI.

2017-09-12: Proposing to consolidate the parsing functionality

We may be able to eliminate the original Parser() module by transferring its functionality to EnParser() for English and RuParser() for Russian.

2017-09-13: Summarize the subject matter of the coding session.

The AI is not properly assigning the $iob flag for the indirect object of a verb in the Psy conceptual array. Early on, we notice that InStantiate() is now being called prematurely from EnParser(), so we move the call down near the end of EnParser(). Then we learn that it is not helpful to reset $iob to zero near the end of the new EnParser module, because the $iob value needs to persist through more than one call to InStantiate().

2017-09-15: Creating pseudo-be-verbs for logical inference in Russian.

In the AI we are back to making a Russian 1800=BE verb appear tentatively after the input of a nominative noun or pronoun. Now, how did the Dushka program do it in MSIE JavaScript? Dushka did use "seqneed" in the InStantiate module, but there may be another way to do it. The "seqneed" flag may not be reliable enough. If a Russian were to say, "The man in the room is my friend," we would want the pseudo-be-verb to be created not before but after the prepositional phrase.

2017-09-17: Enabling conversation based upon input of query-words

In our eagerness to present a "Mind-in-a-Box", we had better restore to the Ghost AI some code from older AI Minds which enabled the machine intelligence to carry on a conversation with a human being. We would like the Ghost to be able to ask questions like "Who are you?" or "What do you think?" In older programs we used InStantiate() to depress the activation on question-words, so that information would flood in to fill the activational vacuum.

We just asked the AI "Who are you" and it answered, "I AM THE PERSON". But the program is not yet a stable version of the AI. In SpreadAct() we used $qv1psi to latch onto an activand subject, and in the same loop we used $qv2psi to super-activate the activand verb. But we need to let the AI go back to normal associations and not persist in answering the initial question.

  if ($qv1psi > 0) {  # 2017-09-17: if there is an activand subject...
    for (my $i=$t; $i>$midway; $i--) {  # 2017-09-17: search backwards in time.
      my @k = split(',', $psy[$i]);  # 2017-09-17: inspect @psy flag-panel
      if ($k[1] == $qv1psi && $k[12] > 0) { $seqpsi = $k[12] }  # 2017-09-17: if seq, seqpsi
      if ($k[1] == $qv1psi && $k[13] > 0) {  # 2017-09-17: require verblock.
        print "  i= $i qv1psi= $qv1psi seqpsi= $seqpsi \n";  # 2017-09-17
        $k[3] = ($k[3] + 32);  # 2017-09-17: impose less than half of subj-inhibition.
        if ($k[12] == $qv2psi) { $k[3] = ($k[3] + 128) }  # 2017-09-17: hyper-activate
        print "   SprAct-mid: for $k[1] setting $k[3] activation \n";  # 2017-09-17
        $psy[$i] = "$k[0],$k[1],$k[2],$k[3],$k[4],$k[5],$k[6],"  # 2017-09-17
                 . "$k[7],$k[8],$k[9],$k[10],$k[11],$k[12],$k[13],$k[14]";  # 2017-09-17
      }  # 2017-09-17: end of diagnostic test
    }  # 2017-09-17: end of (for loop) searching for $qv1psi concept.
  }  # 2017-09-17: end of test for a positive $qv1psi.

2017-09-18: SpreadAct() finds subject and verb to respond to a who-query.

As we try to improve upon who-queries with the AI, we realize that the input of "Who are you" as a query needs to activate instances of the 701=I concept with 800=BE as the $seq and with a positive $tkb verblock. It is not enough to insist upon a positive $tkb verblock, because that value is only a time-point and not the identifier of a concept. The $seq value actually identifies the verb as a particular concept which the SpreadAct() module is trying to find.

It is not even necessary for SpreadAct() to impart activation to the conceptual node of the 800=BE $seq verb, because only the subject of the stored idea needs to have activation high enough to be selected as a response to an incoming query. We may therefore go into SpreadAct() and in the search code for $qv1psi as the subject of the query we only need to verify the existence of the 800=BE $seq verb, not activate it.

In SpreadAct() we make the necessary changes in the code searching for $qv1psi and $qv2psi. We ask "Who are you" and the AI properly answers "I AM THE PERSON." However, as the AI continues thinking, it makes some wrong associations. Suddenly we realize that we forgot to use the $moot flag to prevent the input who-query from leaving associative tags.

2017-09-19: Keeping track of direct-object nouns.

We need some way to keep a $tkb value from being assigned to a direct-object noun towards the end of a sentence of input. In the InStantiate() module we do so by re-setting $tkb to zero if $tkb is equal to the time-of-direct-object $tdo.
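A minimal sketch of that reset, with hypothetical time-points; only the comparison of $tkb against the time-of-direct-object $tdo follows the journal:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical time-points for illustration.
my $tdo = 347;   # time-of-direct-object
my $tkb = 347;   # candidate verblock about to be stored

# In InStantiate(): a direct-object noun must not keep a $tkb
# pointing at its own time-point, so reset $tkb to zero here.
if ($tkb == $tdo) { $tkb = 0 }

print "tkb = $tkb\n";   # tkb = 0
```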

2017-09-22: Using SpreadAct() to ask questions about any new concept noun.

We would like to introduce some simple code which causes the Mind-in-a-Box to inquire about any new noun introduced in conversation with a human user. We may start using a $nucon (new-condition) flag to segregate responsive code in SpreadAct() from the rest of the module. First we should go into SpreadAct() and segregate the pre-existing code for the $qucon query-flag.

In NewConcept() we learn that the part-of-speech $pos of a new concept is discernible from the $pos variable, so we may implement a reaction to any new noun, ignoring new verbs for the time being.

2017-09-24: Fixing AudInput() reentry bug of $rv set for 32=SPACE

We need to troubleshoot why the Perlmind AI has been leaving out the pronoun "I" as the subject of some sentences of output. We suspect that the problem is due to the need for "I" and "you" to be found by search rather than by immediate storage during input. After much troubleshooting, it becomes clear that $rv is being set at one time-point too late only at the start of reentry. We achieve a BUGFIX in the reentry portion of AudInput() when we use "if ($len == 1 && (ord $pho) != 32)" to avoid setting an $rv value for a Psy row at the time-point of a 32=SPACE.
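Under the assumption of a much-simplified reentry loop, the quoted BUGFIX condition behaves as follows; only the guard itself is from the journal, while $t, $len and the loop are a stand-in for the real AudInput():

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Simplified reentry of the word "I " one phoneme at a time.
# The BUGFIX guard prevents the recall-vector $rv from landing
# on the time-point of a 32=SPACE.
my ($t, $len, $rv) = (100, 0, 0);

foreach my $pho ('I', ' ') {
    $t++;
    $len++;
    # BUGFIX: set $rv only on the first real character of the word.
    if ($len == 1 && (ord $pho) != 32) { $rv = $t }
    $len = 0 if (ord $pho) == 32;   # a space ends the word
}
print "rv = $rv\n";   # rv = 101, the time-point of "I"
```

With the guard in place, $rv stays anchored on the first character of the reentrant word instead of drifting onto the space that follows it.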

2017-09-25: Avoiding $prevtag because $pre is set during EnParser().

The AI is not properly setting the $k[10] $pre tag in output sentences. The problem must lie in either EnParser() or InStantiate().

2017-09-26: Using Natural Language Understanding (NLU) to answer questions.

Although in the proof-of-concept searchable AI Mind we are dealing with an initially small knowledge base (KB), our coding of the ability to search for knowledge would apply equally well to an entire datacenter full of information. Today we are using the spreading-activation SpreadAct() mind-module to activate the conceptual elements of knowledge which will supply answers based upon input queries in the format of "Who + verb + noun", as in "Who makes robots?" The query-word "who" is subject to de-activation upon input, while the verb-concept and the noun-concept in the query are passed through SpreadAct() not as random parameters for an associative search, but in their specific roles as Main Verb and as Direct Object of the verb. Thus the AI Mind should respond with answers tailored to the structure of the query, in such a way as truly to demonstrate Natural Language Understanding (NLU).

We start by declaring the new flag-variable of "query-condition for who+verb+direct-object" $qvdocon to segregate the pertinent code in SpreadAct(), and also the "query-condition for who+verb+indirect-object" $qviocon to hold in reserve for when we code the AI response to input queries in the format of "To whom does God give help?" The creation of the one flag suggests the creation of the similar flag, so we declare both of them.

In the InStantiate() module we insert code to detect a who-query with a verb other than "be", and we set $qv2psi with the concept number of the verb. We set $qv4psi with the concept number of any input noun assumed to be the direct object of the incoming verb. Then in the pertinent area of SpreadAct() we need to start searching backwards through memory for instances of the verb in the who-query.
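The detection step might be sketched like this; the 800=BE concept number and the $qv2psi/$qv4psi/$qvdocon variables come from the journal, while the 791=WHO concept number and the %concept/%pos lookup tables are invented for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical concept numbers for "Who makes robots?"
# 791=WHO and the noun/verb numbers are made up for this sketch.
my %concept = (WHO => 791, MAKES => 1605, ROBOTS => 1603);
my %pos     = (WHO => 7,   MAKES => 8,    ROBOTS => 5);

my ($qv2psi, $qv4psi, $qvdocon) = (0, 0, 0);
my $who_seen = 0;

foreach my $word (qw(WHO MAKES ROBOTS)) {
    $who_seen = 1 if $concept{$word} == 791;          # who-query detected
    next unless $who_seen;
    if ($pos{$word} == 8 && $concept{$word} != 800) { # verb other than BE
        $qv2psi = $concept{$word};
    }
    if ($pos{$word} == 5 && $qv2psi > 0) {            # noun after the verb
        $qv4psi  = $concept{$word};                   # assume direct object
        $qvdocon = 1;   # flag: who + verb + direct-object query
    }
}
print "qv2psi=$qv2psi qv4psi=$qv4psi qvdocon=$qvdocon\n";
```

With $qv2psi and $qv4psi set, SpreadAct() can search backwards through memory not for random associations but specifically for stored ideas whose Main Verb and Direct Object match the query.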

Eventually we obtain a rough but correct response to our queries of "Who does such-and-such?" but we need to debug and fine-tune the parameters. We ask, "Who makes robots" and we get "KIDS MAKES THE ROBOTS." We ask, "Who has a child" and we get "WOMEN HAS THE CHILD". We need to upload and release the code which achieves the objective, albeit primitively, and we must not code the same version further lest we wreck or corrupt the new functionality of answering who-queries in the form of "who" plus verb plus direct object. As we debug future releases of our code, the released version remains safe and intact.

To Do

[ . ] Stop having verbs like ДЕЛАТЬ as the default Russian verb in the RuVerbGen module. Instead, use verbs like ГОВОРИТЬ as the default and treat verbs like ДЕЛАТЬ and like ТРЕБОВАТЬ as special paradigms to be recognized from their stems.

[ . ] For special Ghost Webserver AI ontologies, create little ontological files of special information.txt written in a format digestible by the latest Ghost AI software, so that the Ghost needs only to read each file in order to have knowledge about each subject.

[ . ] Make the Perl Ghost AI able to go out and surf the Web in search of information.

[ . ] Make the Ghost AI able to send and receive e-mail messages.

[ . ] Give the Ghost AI a Web presence where Netizens may interact on-line with the Perl AI Mind.

[ . ] Install the Ghost AI in a humanoid robot.

[ . ] Make the Ghost AI able to send an exact copy of its software and its memories from one webserver to one or more other webservers.

[ . ] Create graphic displays of the Ghost AI casually thinking or reacting to human input with an AI brainstorm. Make thinking visible.

1. MindGrid as Theater of Neuronal Activations

Influences upon AGI neuronal activation include:

Books for creating Perl AI:
Learning Perl, by Randal L. Schwartz and Tom Christiansen
Perl Black Book, by Steven Holzner
Perl by Example, Fifth Edition, by Ellie Quigley
Programming Perl, by Larry Wall, Tom Christiansen, and Jon Orwant
[Perl AI enthusiasts should write books on "AI in Perl".]

Page created: 12 April 2015
Mentifex asks White House Deputy Technology Chief Ed Felten to point out technology reporter
John Markoff of the New York Times at the Artificial Intelligence: Law and Policy workshop.
Many thanks to NeoCities.
