Perlmind Programming Journal (PMPJ)

[NEWS] The Perlmind Programming Journal (PMPJ) is both a tool in the development of the Perlmind open-source artificial intelligence (AI) and an archival record of how the Perlmind AI evolved over time.

Sun.12.APR.2015 -- Mentifex AI moves into Perl.

Since the Mentifex AI Minds are in need of a major algorithmic revision, it makes sense to reconstitute the Mentifex Strong AI in a new programming environment, namely Perl, beyond the original Mentifex encampments first in REXX (1993-1994), then in Forth (1998-present) and finally in JavaScript (2001-present). With Perl, we remain in a scripting language, but a language more modern and more prevalent than Forth. We savor the prospect of ensconcing our Perl mind-modules within the prestigious and Comprehensive Perl Archive Network (CPAN), where we already proposed some AI nomenclature a dozen years ago. With Perl we open up the mind-boggling and Mind-propagating vistas of seeding the noosphere with explosively metastatic and metempsychotic Perl AI that can transfer its files and its autopoiesis instantaneously across and beyond the vastness of the World Wide Web.

Sun.12.APR.2015 -- Downloading the Perl Language

Next we need to do a Google search-and-deploy mission for obtaining a viable version of the Perl language for our Acer Aspire One netbook running Windows XP home edition.

Ooh, sweet! When we search for "download Perl" on Google, we are immediately directed to a page which presents us with a choice among the Unix/Linux, Mac OS X, and Windows operating systems. Although we wish we were on 64-bit Linux so that we could be listed in a GNU/Linux AI website, we had better choose between ActiveState Perl and Strawberry Perl for our current Windows XP platform. Let's click on the link for Download Strawberry Perl, because it is a 100% Open Source Perl for Windows without the need for binary packages. The site recommends that we use the "latest stable version, currently 5.20.2," and Strawberry Perl (32 bit) is offered to us.

When we first click on the download, a Security Warning asks us whether we want to run or save this 68.6MB file. We click to save the file on our Acer Aspire One netbook. Huh? Almost instantaneously, after we see that the target will be our Acer C-drive, we get a pop-up window saying that we have completed a download not of 68.6 megabytes, but of 116KB in one second to C:\strawberry-perl- and that we may now click on "Run" or "Open Folder" or "Close". Let us click on "Run" to see what happens. Now we get another Security Warning: "The publisher could not be verified. Are you sure you want to run this software?" Its name is "strawberry-per- msi" and we can click on "Run" or "Don't Run". Let's click on "Run". It starts to show a green download transfer, but suddenly it stops and a "Windows Installer" message says, "This installation package could not be opened. Contact the application vendor to verify that this is a valid Windows Installer package."

So we go back to where we had the choice between "Run" and "Save" and this time we click "Run" instead of "Save." In the space of two to three minutes, the package downloads into a "temporary folder." Then a Security Warning says, "The publisher could not be verified. Are you sure you want to run this software?" Let's click "Run."
Now it says "preparing to install" and "wait for the set-up wizard." Finally it says, "The Setup Wizard will install Strawberry Perl on your computer. Click Next to continue or Cancel to exit Setup." Well, I have a complaint. Why did the process not work when I tried to "Save" the download instead of merely "Running" it for what I was afraid would be one single time? Why is the process of installing Perl so obfuscated and so counter-intuitive? Well anyway, let's click on "Next" and get with the program. Next we have to click the checkbox for "I accept the terms in the License Agreement." Now for a Destination Folder the Strawberry Perl Setup says to "Click Next to install to the default folder or click Change to choose another." C:\Strawberry\ is good enough for Mentifex here. Then we "Click Install to begin the installation." Oops: "Error reading from file C:\Documents and Settings\Arthur\Local Settings\Temporary Internet Files\Content.IR5\R6BYZW40\strawberry-perl-[1].msi. Verify that the file exists and that you can access it." The installation has ended prematurely because of an error. So we go back again to the initial download process, go with "Run" instead of "Save," and, wonder of wonders, we are able to install Perl. We "Click the Finish button to exit the Setup Wizard," and we will read the Release Notes and the README file available from the start menu. Aha! Upon clicking the Windows XP "start" button, we proceed through "All Programs" and "Strawberry Perl" to the Strawberry Perl README in a Notepad file on-screen.

Sun.12.APR.2015 -- Learning to program Perl Strong AI

Now we have to figure out how to run a program in Perl. We go to a Learning Perl tutorial, which says to check that you have Perl installed by entering
perl -v
and so we actually enter
C:\Strawberry\perl -v
and it works! It says "This is perl 5, version 20, subversion 2 (v5.20.2)" and so on. Next, with the MS-DOS make-directory "md" command, we enter "md perl_tests" to create a "perl_tests" subdirectory.

Then we open the Notepad text editor and we create a file, because we want to start programming Perl artificial intelligence immediately. We try to run it with
C:\Strawberry>perl /path/to/perl_tests/
At first we get "No such file or directory," but when we change directory and enter
C:\Strawberry\perl_tests>perl
we see:
hi
and so we have run our first Perlmind AI program.
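Though the journal does not preserve the filename of that first program, its whole content could be as small as this sketch (a reconstruction, not the journal's actual file):

```perl
#!/usr/bin/perl
# The entire first Perlmind test program: print a greeting and exit.
use strict;
use warnings;

my $greeting = "hi";
print "$greeting\n";
```

Saved into the perl_tests directory, it runs from the command prompt with "perl" followed by the script name.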

Sun.12.APR.2015 -- Perl Strong AI Resources

Perlmind Programming Journal (PMPJ) -- details for 2015 May 10:

Our next concern is how we will save auditory engrams into the @aud auditory memory array, with not only the $pho phonemic character being saved, but with other elements horizontally saved into a line of data. In a departure from the previous AI Minds in REXX, in Forth and in JavaScript, we wish to tighten up and simplify the number of items being saved beyond each character itself. The JavaScript AI saves auditory engrams in the following format.

krt  pho act pov beg ctu audpsi
631.  {
632.  I   0   #   1   0   701
Before we go about eliminating some of the legacy items from the flag-panel, first we have to learn how to save the flag-panel in Perl.
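One way to save such a flag-panel row in Perl is to join the flags into a comma-delimited string at each time-point of the @aud array. This is only a sketch of the idea (the aud_store helper name is an assumption, not the journal's code):

```perl
use strict;
use warnings;

my @aud;  # auditory memory array, one comma-joined flag-panel per time-point

# Store one engram row with the legacy flags: pho, act, pov, beg, ctu, audpsi.
sub aud_store {
    my ($t, $pho, $act, $pov, $beg, $ctu, $audpsi) = @_;
    $aud[$t] = join ',', $pho, $act, $pov, $beg, $ctu, $audpsi;
}

aud_store(632, 'I', 0, '#', 1, 0, 701);  # the example row shown above
print $aud[632], "\n";                   # I,0,#,1,0,701
```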

We have gotten the Perl program to display the contents of @aud auditory memory on a line-by-line basis with the following main-loop code.

while ($t < $cns) {  # PERL by Example (2015), p. 190 
  $age = $age + 1;   # Increment $age variable with each loop.
  print "\nMain loop cycle ", $age, "  \n";  # Display loop-count.
  sensorium(); # PERL by Example p. 350: () empty parameter list 
  think();     # PERL by Example p. 350: () empty parameter list 
  if ($age == 999) { die "Perlmind dies when time = $t \n" }  # safety; == compares numbers, eq compares strings
  if ($t > 30) {
    do { 
      # print @aud;  # show contents of $aud array
      print $aud[$krt], "\n"; # Show each successive character in memory.
      $krt++;     # increment $krt
    } while ($krt < 10);  # show @aud array at all time-points  
  };  # outer braces
}  # End of main loop calling Strong AI subroutines 
Next we need to add such flag-panel items as the time-points and the activation-level.

Perlmind Programming Journal (PMPJ) -- details for 2015 May 13:

In the Perlmind as a third-generation ("3G") AI, we need to change the "AudMem" module in combination with the "speech" module so that they both handle a greatly simplified @aud auditory memory array. In the new "3G" @aud array, there should not be seven unwieldy and partly unnecessary flags in the flag-panel or "row" of the auditory array. Instead, there should be only flags which have a legitimate basis in the neuronal nature of the auditory memory. There should be the phoneme itself, its quasi-neuronal activation-level, and its conceptual associative tag.

Perlmind Programming Journal (PMPJ) -- details for 2015 May 14:

Now we must cause the speech() module to display each word horizontally and with nothing but the phonemic character showing. How do we read out only one element, the $pho, from each row in the @aud auditory array?

Perlmind Programming Journal (PMPJ) -- details for 2015 May 17:

We need to experiment with "slice" in Perl so that we may break down an engram-row in @aud memory and retrieve individual elements from any row in the @aud array.
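A sketch of the idea: if each engram-row is stored as a comma-joined string, split can break it back into individual flags (the three-flag layout assumed here follows the simplified "3G" format proposed above):

```perl
use strict;
use warnings;

# One stored engram-row in the simplified "3G" format: pho, act, audpsi.
my $row = 'b,8,701';

# split breaks the comma-joined row back into its individual elements.
my ($pho, $act, $audpsi) = split /,/, $row;

print "$pho\n";   # only the phonemic character, as speech() would display it
```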

Perlmind Programming Journal (PMPJ) -- details for 2015 May 28:

The next module we need to implement is NewConcept, because the AI Mind cannot begin to think without making associations among concepts and ideas. In the early AI Steps, all concepts are new concepts.

We need to expand the TabulaRasa routine to include the @psi conceptual array and the @en English lexical array. We do so, and the AI still runs.

In MindForth, NewConcept is called from the AudInput module, which will also call OldConcept when we have coded OldConcept. Now we stub in the NewConcept module, and the AI still runs. Next we add some temporary code to show that new entries are being made in the @psi conceptual array.

Perlmind Programming Journal (PMPJ) -- details for 2015 June 01:

Now that NewConcept has stored data in the @psi conceptual array, next we need EnVocab to be called by the NewConcept module to store data in the @en array for English vocabulary. We will need some temporary, non-permanent code in the AI Perlmind to display data present in the @en array along with the @psi array and the @aud array.

We start using the $nen variable as the "number of the English word." Then we discover that we need to reverse the order of some calls and have AudInput() call NewConcept() first and AudMem() later, so that any new concept is created before its engrams are stored in memory. There is also a major resource which we are not yet ready to use, but which may be of enormous value later in the further development of AI Perlminds.

Perlmind Programming Journal (PMPJ) -- details for 2015 June 04:

Next we need to stub in the EnParser (English Parser) module, not so much for its own sake, but rather as a bridge to calling the InStantiate module. We have EnParser merely announce that it has been called by the NewConcept module, so that we may see that the AI still runs with no apparent problems. We declare the $bias and $pov variables associated with EnParser so as to make sure that they are not reserved words in Perl. We clean up some lines of code previously commented out but left intact for at least one archival release iteration. We caution here that the EnParser module is extremely primitive and relies upon very strict input formats such as subject-verb-object (SVO), so that EnParser can expect a subject-noun, then a verb, and then an object-noun or a predicate nominative. We are more interested in the demonstration of thinking than in the demonstration of parsing. Perl AI coders may be able to adapt pre-existing CPAN or other parsing code for use with the AI Perlmind.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 10:

After many years of development, Perl6 has finally been released around the beginning of this new year 2016. We now position the emerging AI Perlmind as a killer app for the emerging Perl6 programming language. Yesterday we uploaded the Perl6 AI Manual to the Web for use with both P5 AI and P6 AI.

Apparently both Perl5 and Perl6 will have problems in accepting each single keystroke of input from a human user. Therefore we should shift our AI input target away from immediate human keyboard entry and towards the opening and reading of computer files by the AI Mind. Since we envision that a P6AI will sit quietly on a webserver and ingest both local and remote computer files, it makes sense now to channel input into the AI as a file rather than as dynamic keyboard entry.

Today we have created C:\Strawberry\perl_tests\input.txt as a textfile containing simply "boys play games john is a boy" as its only content. Then we have copied the code-sequence of AudInput() as FileInput() and we have made the necessary changes to accept input from an input.txt file instead of from the keyboard.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 11:

Today we need to figure out how to read in each line of input.txt and how to transfer each English word into quasi-auditory memory.

In the FileInput() subroutine of the source, it looks as though the WHILE loop for reading a file may be running through completely before any individual line of input is extracted for AI processing. We move the NewConcept() and AudMem() calls into the WHILE loop so that each line of input is processed separately. However, not just each line, but each word within a line, needs to be processed separately.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 12:

A line of text input needs to be broken up into individual words. First we learn from the Perl Black Book, page 568, that the getc function lets us fetch each single character in a line from our input.txt file. Therefore in the FileInput() module we use the "#" symbol to comment out the WHILE-loop that was transferring a whole message "$msg" into AudMem(). Then we use getc in a new WHILE-loop to transfer a series of incoming characters from input.txt into AudMem(), where we comment out the string-reversing and chopping code and convert a do-loop into a simple series of non-looping instructions, because the looping is now being done up in the FileInput() module. We see that the program is now transferring individual input characters into auditory memory. Later we will need to make the transfers stop at the end of each input word, as shown by a blank space or punctuation or some other indicator. The new code is messy, but we should upload it to the Web and clean it up when we continue programming.
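A minimal sketch of the getc approach, including the later goal of stopping at a word boundary (here an in-memory filehandle stands in for input.txt, and the letter-versus-space test is an assumption about the word-boundary indicator):

```perl
use strict;
use warnings;

# An in-memory filehandle stands in for the journal's input.txt file.
my $text = "boys play games\n";
open my $fh, '<', \$text or die "Cannot open input: $!\n";

my @words;
my $word = '';
while (defined(my $char = getc $fh)) {
    if ($char =~ /[A-Za-z]/) {
        $word .= uc $char;       # accumulate the letters of one word
    } elsif (length $word) {
        push @words, $word;      # a space or newline ends the word
        $word = '';
    }
}
push @words, $word if length $word;  # flush a final word with no trailing space
close $fh;

print "@words\n";   # BOYS PLAY GAMES
```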

Perlmind Programming Journal (PMPJ) -- details for 2016 January 13:

In the FileInput() module of the AI we are inserting the call to NewConcept() so that AudMem() will show an incrementing concept number for each word being stored in auditory memory. Uh-oh, running the AI shows that each stored character is getting its own concept number. Obviously, we will have to call NewConcept() only when an entire new word is being stored, not each individual character.

We were able to test for a blank space (probably not a sufficient test) after an input word in FileInput(), and then order a "return" out of the WHILE-loop. We had to put the return inside braces as "{ return }" to avoid crashing the program. Now the AI loads the first word "boys" over and over into auditory memory, but we have made progress.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 14:

Let us see what happens if we run the Perl AI with no input.txt available for the AI to read. We save input.txt elsewhere and then we delete input.txt from the perl_tests directory. We run the AI program without an input.txt file available, and it goes into an infinite loop. We change the FileInput() code that opens the input.txt file by adding an "or die" clause to halt the program and issue an error message. It works, and we no longer get an infinite loop. Then we add the input.txt file back into the directory.
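The or-die guard can be sketched as follows (the open_input helper and the filename are assumptions for illustration; the demonstration wraps the failure in eval so the sketch itself does not crash):

```perl
use strict;
use warnings;

# Guard the open with "or die" so that a missing input.txt halts the
# program with an error message instead of causing an infinite loop.
sub open_input {
    my ($path) = @_;
    open my $fh, '<', $path
        or die "FileInput: cannot open $path: $!\n";
    return $fh;
}

# Demonstrate the failure path without crashing this sketch.
my $fh = eval { open_input('no_such_input.txt') };
print $@ ? "open failed as expected\n" : "opened\n";
```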

Now we need to work on getting the AI to store the first word of input and to move on to each succeeding word of input.

When we inspect the MindForth code, we see that the AudInput module first calls OldConcept at the end of a word, and only calls NewConcept if the incoming word is not recognized as an old concept. So we should create an OldConcept() module in the Perl AI program.

In the FileInput() module, we might just wait for a blank space-character and use it to initiate the saving of the word and the calling of both OldConcept and NewConcept(). Even if everything pauses to store the word and either recognize it or create a new concept, the reading of the input file should simply resume and there should be no special need to keep track of the position in the input-line.

In accordance with the MindForth code, any non-space character coming in should go into AudMem(). An ASCII-32 space character does not get stored, but rather a storage-space of one time-point gets skipped, because MindForth AudInput increments time "t" for all non-zero characters coming in. In other words, skipping one time-point in auditory memory makes it look as if a space-character were being stored.

It turns out that time "$t" was not yet being incremented in the AI, so we put an autoincrement into the FileInput() module.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 15:

It is time to create the AudBuffer() module, to be called by AudInput() or FileInput() and by VerbGen(). The primitive coding may be subject to criticism, since the module treats a series of variables as a storage array, but the code, albeit primitive, not only serves its purpose but is also easily understandable by the AI coder or system maintainer. For now we merely insert a stub of the AudBuffer() module.

After wondering where to place the AudBuffer() module, today we re-arrange all the mind-modules to be in the same sequence as MindForth has them, so that it will be easier, when inspecting code, to move among the Forth and JavaScript and Perl AI programs. MindForth compels a certain sequence because a module in a Forth program can call only modules higher up in the code.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 16:

The program is going to get extremely serious and extremely complicated now, because for the first time in about eighteen years we are going to change the format of the storage of quasi-acoustic engrams in auditory memory. We are going to change the six auditory panel-flags from "pho act pov beg ctu audpsi" down to a group of only three: "pho" for the phoneme or character of auditory input; "act" for the activation-level; and "audpsi" for the concept number in the @psi conceptual memory array.

The point-of-view "pov" variable will no longer be stored in auditory memory, and instead other functions of memory will have to remember, if possible, who generated a sentence or a thought stored in auditory memory. Over the years it has been helpful to inspect the auditory memory array and to see whether a sentence came from the AI itself or from an external source.

The flag-variables "beg" for beginning of a word and "ctu" for continuation of a word served a purpose in the early AI Minds but are now ready for extinction. The Perl language is so powerful that it should simply detect the beginning or ending of a word without relying on superfluous flags stored in the engram itself. Removing obsolete flags makes the code easier to understand and easier to develop further.

We should probably next code the EnVocab() module for storing the fetch-tags of English vocabulary, because the @psi concept array will need to direct pointers into the @en array. In MindForth, EnVocab comes in between InStantiate for "psi" concepts and EnParser for English parts of speech. Oh, we already have a stub of EnVocab(). Then it is time to flesh out the module.

First we create the number-flag $num for grammatical number, which is important for the retrieval of a stored word in English or German or Russian. Then we create the masculine-feminine-neuter flag mfn for tracking the gender of a word in the @en English array.

We may now be able to discontinue the use of the fex flag for "fiber-out" and fin for "fiber-in". These flags were helpful for interpreting pronouns like "I" and "me" as referring to the AI itself or to an external person. The Perlmind should be able to use point-of-view "pov" code to catch pronouns or verb-forms that need routing to the correct concept.

We still need a part-of-speech pos flag to keep track of words in the @en array. We also need the $aud flag as an auditory recall-tag for activating engrams in the @aud array, unless it conflicts with the @aud designation and needs to be replaced with something like $rv for recall-vector.

The $nen flag is already incremented in NewConcept(), and now we begin storing $nen during the operation of EnVocab(). Then we had many problems because in TabulaRasa() we had filled the @en English array with zeroes instead of blank spaces.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 17:

In the program we continue working on EnVocab() for English vocabulary being stored in the @en array. Today we create the variable $audbeg for auditory beginning of an auditory word-engram stored in the @aud array. We also create the variable $audnew to hold onto the value of a recall-vector onset-tag for the start of a word in memory while the rest of the word is still coming in. By setting the $audnew flag only if it is at zero, we keep the flag from changing its truly original value until the whole word has been stored and the $audnew value has been reset to zero for the sake of the next word coming in.

Today we kept getting a message for a bug in the AI, something like, "Use of uninitialized value in concatenation (.) or string at line 295," at a point where we were trying to show the contents of a row in the @en English lexical array. In TabulaRasa() we solved the bug by declaring $en[$trc] = "0,0,0,0,0,0,0"; with seven flags set to zero. Apparently TabulaRasa() initializes all the items in the array.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 18:

In the AI Perlmind, let us see what happens at each stage of reading an input.txt file.

The MainLoop calls sensorium() which in turn calls the FileInput() module. FileInput() goes into a WHILE-loop of reading with getc (get character) for as long as the resulting $char remains defined. As each character comes in, FileInput() calls AudMem() to store the character in auditory memory. Each time that $char becomes an empty non-letter at the end of an input word, FileInput() increments the $onset flag from $audnew and calls NewConcept(), because the AI must learn each new word as a new concept.

NewConcept() increments the number-of-English $nen lexical identifier and calls the English vocabulary EnVocab() module to set up a row of data in the @en array. NewConcept() calls the stub of the English parser EnParser() module. FileInput() calls the stub of the OldConcept() module.

The MainLoop module calls the Think() module which calls Speech() to output a word as if it were a thought, but the AI has not yet quickened and so the AI is not yet truly thinking. At the end of the program, the MainLoop displays the contents of the experiential memory for the sake of troubleshooting the AI.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 19:

The program is now ready for the InStantiate() module, which creates concepts in the conceptual array of the artificial Mind. Let us change the @psi array into the @psy array so that a $psi variable will not conflict with the name of the conceptual array.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 20:

We may need to remove the activation-flag from the flag-panel of the @en English lexical array. In the previous Forth and JavaScript AI Minds, we had "act" there in case we needed it. Now it seems that in MindForth only the KbSearch module uses "act" in the English array, and the module could probably use @psy for searches instead of the @en lexicon.

There is some question whether part-of-speech $pos should be in the @psy conceptual array or in the @en lexical array. A search for "6 en{" in the MindForth code of 24 July 2014 reveals that no use seems to be made of part-of-speech "pos" in MindForth. Apparently part-of-speech has already been dealt with during the functions that use the Psi array, and therefore the English array does not concern itself with part-of-speech. So part-of-speech could be dropped from the @en English array.

It looks as though part-of-speech has to be assigned in the @psy array before inflections are fetched in a lexical array. If a person says, "I house you in a tent," then a word that is normally a noun becomes a verb, "to house." The software should override any knowledge of "house" as being a noun and store the specific, one-time usage of "house" as a verb. Then the AI robot can respond with "house" as a verb to suit the occasion: "Please house me in a shed." OldConcept() should not automatically insist that a known word always has a particular part-of-speech. In a German AI, VerbGen() should be called to create verb-endings as needed, if not already stored in auditory memory.

In the @psy concept array we should have seven flags: psi, act, pos, jux, pre, tkb, and seq. If we now change the tqv variable from MindForth to $tkb in the Perl AI, it clearly becomes "time-in-knowledge-base" for Perl coders and AI maintainers.

It suddenly dawns on us that we no longer need an enx flag in the @psy array. We may still need the $enx variable for passing a fetch-value, but it looks like the @psy concept number and the @en lexical number will always be the same, since we coded MindForth to find inflections for an unchanging concept number.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 21:

The code now invites us to make a drastic simplification by merging the @psy array and the @en array, because any distinction between the two arrays has gradually become redundant. The @psy array has psi, act, pos, jux, pre, tkb, seq flags. The @en array has nen, num, mfn, dba, rv flags. We could join them together into one @psy conceptual array with psi, act, pos, jux, pre, tkb, seq, num, mfn, dba, rv flags.

The first thing we do is in TabulaRasa(), where we fill each row of the @psy array with eleven zeroes for the eleven flags. Next we have the InStantiate() module store all eleven flags in the combined flag-panel. We run the Perl AI and it makes no objections. Then we have InStantiate() announce the values of all eleven flags before storing them.
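The TabulaRasa() step for the merged flag-panel might be sketched like this (the eleven-flag order follows the list above; the small $cns value is only for the sketch):

```perl
use strict;
use warnings;

my $cns = 7;   # small conscious-memory size for this sketch
my @psy;       # merged conceptual array: psi,act,pos,jux,pre,tkb,seq,num,mfn,dba,rv

# TabulaRasa: fill each row of @psy with eleven zeroes for the eleven flags,
# so that no row holds an uninitialized value.
sub TabulaRasa {
    for my $trc (0 .. $cns) {
        $psy[$trc] = join ',', (0) x 11;
    }
}

TabulaRasa();
print $psy[0], "\n";   # 0,0,0,0,0,0,0,0,0,0,0
```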

In the flag-panel of the @psy array, we should probably add a human-language-code "hlc" so that an AI can detect English or German or Russian and think in the indicated language.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 22:

Now that we have merged the @en array into the @psy conceptual array, we gradually need to eliminate the $nen variable. However, we need a replacement other than the $psi variable, so that the replacement variable can hold steady and wait for each new word being learned in English, German, Russian or whatever human language is involved. Let us try using $nxt as the next-word-to-be-learned.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 23:

We are now trying to code the AudRecog() module taken from MindForth, although the timing may be premature.

As we began coding AudRecog() in the AI, we discovered that the primitive EnBoot() sequence did not contain enough English words to serve as comparands with a word being processed in the AudRecog() module, so we must suspend the AudRecog() coding and fill up the EnBoot sequence properly before we resume coding AudRecog().

Today we rename the English bootstrap EnBoot() sequence as MindBoot() because the Perl AI with Unicode will not be limited to thinking only in English, but will eventually be able to think also in German and in Russian.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 24:

We are replacing the Think() module with EnThink() for English thinking, and we are declaring DeThink() as a future German thinking module and RuThink() as a future Russian thinking module.

Coding the AudRecog() module in Perl5, we move left-to-right through the nested if-clauses. At the surface we test for a matching $pho. Nested down one layer, we test for zero activation on the matching $pho, because we do not want a match-in-progress. At the second depth of nesting, we test for the onset-character of a word. In the previous AI Minds coded in Forth and JavaScript, we still had the "beg(inning)" flag to fasten upon a beginning character at the start of a comparand word in auditory memory. Now in the Perl killer-app AI we must rely on the $audnew variable, which is set during FileInput() but which we have apparently neglected to reset to zero again. Let us try setting $audnew back to zero just before we close the input.txt file. Oh no, $audnew won't work here, because $audnew applies only to the beginning of an input word, not to the beginning of a word stored in memory. Maybe we can try testing not only for a zero-activation matching $pho but also for an adjacent blank space.

Now, we are going backwards in memory from space-time $spt down to $midway, which is set to zero in the primitive AI. The $i variable is being decremented at each step backwards. We would like to know if going one step further encounters the space before a word. We might have to start searching forwards through memory if we want to trap the occurrence of an initial character in a stored word. If we go forwards through memory, we could have a $penult variable that would always hold the value of each preceding moment in time. For the chain of activations resulting in recognition, it should not matter if the sweep goes backwards or forwards.
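The forward sweep with a $penult variable might look like this sketch (an assumption about how the idea could be coded, not the journal's actual AudRecog):

```perl
use strict;
use warnings;

# Sweep forwards through auditory memory, holding the character at each
# preceding time-point in $penult, so that a word-initial engram can be
# detected as a non-space character preceded by a blank space.
my @aud = (' ', 'B', 'O', 'Y', ' ', 'G', 'I', 'R', 'L');
my $penult = ' ';   # value of the preceding moment in time
my @onsets;         # time-points where a stored word begins

for my $i (0 .. $#aud) {
    my $pho = $aud[$i];
    push @onsets, $i if $pho ne ' ' and $penult eq ' ';
    $penult = $pho;
}

print "@onsets\n";   # 1 5
```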

Perlmind Programming Journal (PMPJ) -- details for 2016 January 25:

Now we will stop searching backwards in AudRecog() and search forwards instead, so that it will be easier to find the beginning of a comparand word stored in auditory memory.

As we debug the AI, we notice that the MindBoot sequence is not a subroutine, as EnBoot was in the previous AI Minds. We should call MindBoot() as a one-time subroutine from the MainLoop. We establish TabulaRasa() and MindBoot() as subroutines and we give them a one-time call from the MainLoop.

Throughout many tests we were puzzled because AudRecog() was not recognizing an initial "b" at zero activation preceded by a zero $penult string. Finally it dawned on us that the MindBoot() "BOY" was in uppercase, so for a test we switched to lowercase "boy", and suddenly the proper recognition of the initial character "b" was made. But we will need to make input characters go into uppercase, so that AudRecog() will not have to make distinctions.

Perlmind Programming Journal (PMPJ) -- details for 2016 January 26:

Moving on, we need to consult our Perl reference books for how to shift input words into UPPERCASE. The index of Perl by Example has no entry for "uppercase". None for "lowercase", either. However, the index of the Perl Black Book says, "Uppercase, 341-342." BINGO! Mr. Steve Holzner explains the "uc" function quite well on page 341. Let us turn the page and see if we need any more info. Gee, page 342 says that you can use "ucfirst" to capitalize only the first character in a string -- one more example of how powerful Perl is. Resident webserver superintelligence, here we come.
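The two functions in a quick sketch:

```perl
use strict;
use warnings;

# uc shifts an entire string into uppercase; ucfirst capitalizes
# only the first character of the string.
my $word   = "boys play games";
my $upper  = uc $word;        # BOYS PLAY GAMES
my $capped = ucfirst $word;   # Boys play games

print "$upper\n$capped\n";
```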

Now let us try to use the "uc" function in the free Perl AI source code as we continue. We had better look into the FileInput() module first. Hmm, let us go back to the index of Perl by Example, where we find "uc function, 702." Okay, let us try using "uc $char" at the start of the input WHILE-loop in the FileInput() module. Huh? It did not work. Uh-oh. Houston, we have a problem. Our mission-critical Perl Supermind is stuck in lowercase. Here we have been trying to learn Perl, but we have never coded any Perl program other than artificial intelligence. Even our very first "Hello world" Perl program was an AI program, and we never did any scratch-pad Perl coding. Meanwhile there are legions of Perl coders waiting for us to finish the port of the AI Minds first into Perl5 and then into Perl6. Let us check the Perl Black Book again. Let us try $char = "uc . $char"; in the FileInput() module. We drag-and-drop the line of code from this journal entry straight into the AI code. Then we issue the MS-DOS "perl" command and take a look. Oh no, the "uc" itself is going into memory as if it were the input. Hey, it finally works when we use $char = uc $char; as the line of code. Now the contents of auditory memory are being displayed in uppercase. We can go back to coding the AudRecog() module.

2016 January 27:

Although we have done away with the ctu-flag of MindForth in the Perl AI, because we want to reduce the number of flags stored in the @aud auditory memory, in AudRecog() we may create a non-engram "ctu" or its equivalent by using the split function to look ahead one array-row and see whether a stored comparand word continues beyond any given character.
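The look-ahead idea can be sketched as follows. This is an assumption-laden toy, not the real AudRecog() code: each @aud row is imagined here as a comma-joined string of "character,activation,audpsi", and the helper name continues() is hypothetical.

```perl
use strict;
use warnings;

# Hypothetical auditory rows: "character,activation,audpsi" per time-point.
my @aud = ("B,8,0", "O,7,0", "Y,6,589", " ,0,0");

# Look one row ahead to see whether the stored word continues,
# instead of keeping a ctu (continuation) flag in every engram.
sub continues {
    my $t = shift;                        # current time-point
    return 0 if $t + 1 > $#aud;          # no next row in memory
    my @nxr = split /,/, $aud[$t + 1];   # split the next row into fields
    return ($nxr[0] =~ /[A-Z]/) ? 1 : 0; # continues if next char is a letter
}

print continues(0), " ", continues(2), "\n"; # "1 0": word goes on after B, ends after Y
```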

2016 January 28:

In we would like to have the FileInput() module call the human-computer-interaction AudInput() module if the input.txt file is not found. In that way, we can simply remove input.txt to have a coding session of direct human interaction with the AI Perlmind.

2016 January 29:

In we are continuing to improve AudInput() towards equal functionality as we developed in FileInput().

The line of input goes into a $msg string, which AudInput() needs to process in the same way as FileInput() processes the input.txt file, except that AudInput() only has to deal with one line at a time, which is presumably one sentence or one thought at a time.

2016 January 30:

Today in we hope to fix a problem that we noticed yesterday after we uploaded to the Web. We had carefully gone about sending the input $pho (phoneme) into AudMem() and AudRecog(), but no word was being recognized in AudRecog() -- which we had coded for six hours straight three days ago. Then yesterday we saw that we had left a "Temporary test for audrec" in the AudMem() module and that the code was arbitrarily changing the $audpsi from any recognized $audrec concept to the $nxt (next) concept about to be named in the NewConcept() module. Now we will comment out that pesky test code and see if AudRecog() can recognize a word. Hmm, commenting out the code did not seem to work.

We hate to debug the pristine Perl AudRecog() by inserting diagnostic message triggers into it, but we start doing so, and pretty soon we discover that we neglected to begin AudRecog() with the activation-carrier $act set to eight (8), as it is in the predecessor Mindforth AI. So let us set $act to eight in the AudRecog() Perl code and see what happens. Uh-oh, it still does not work.

But gradually we got AudRecog() to work. Now in the AI we are working on the AudMem() module. We want it to store each $pho phoneme in the @ear array as $audpsi if there has been an auditory recognition, and as simply $nxt if only the next word from NewConcept() is being stored.

The $audpsi shall be stored if the next time-point is caught by the ($nxr[0] !~ /[A-Z]/) test as not being a character of the alphabet.

2016 January 31:

Yesterday we got the Perl AI to either recognize a known word and store it in the @ear array with the correct $audpsi tag, or instead to store a word as a new concept with the $nxt identifier tag. However, in the @psy conceptual array, the Perlmind is improperly incrementing the $nxt tag because we have not yet figured out how to declare that a character flowing by is the last character in a word. Bulbflash: Maybe we can store the $nxt tag at the end of each @ear row, erasing it when each successive character comes in, so that only the last letter of the word will end up having the $nxt tag.
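The bulbflash can be sketched directly. Under the assumption that each @ear row holds "character,tag", we tag every incoming character provisionally and erase the tag on the previous row as each new character arrives, so only the final letter keeps the concept number. The value 900 for $nxt is illustrative.

```perl
use strict;
use warnings;

my $nxt = 900;   # next-new-concept number (assumed value)
my @ear;         # rows of "char,tag"
my $t = 0;
for my $pho (split //, "SEES") {
    $ear[$t - 1] =~ s/,\d+$/,0/ if $t > 0;  # erase the tag on the prior engram
    $ear[$t] = "$pho,$nxt";                  # provisionally tag this one
    $t++;
}
print join(" | ", @ear), "\n";  # only the final "S" keeps 900
```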

2016 February 01:

Yesterday we uploaded the January 2016 PMPJ material to the Cyborg weblog as a blog-post. Today in we are continuing to port the Strong AI Minds from Forth and MSIE JavaScript first into Perl5 and then hopefully into the new Perl6.

When we type a multiword sentence into the primitive Perl AI, we notice that only the first word is currently being stored in the @ear auditory memory array. We type in "boys play games" but we see only the word "BOYS" in the memory array. Obviously something needs to be improved in the AudInput() module for auditory input, where we see a do-while loop that stops when it encounters a blank space in the user input. We need to process the entire sentence of user input.

2016 February 02:

In yesterday we made it possible for an outer loop in the AudInput() module to read in an entire sentence of user input while the inner loop deals with individual words in the sentence by sending each character into the AudMem() module. AudMem() sends each word character by character into the AudRecog() module for auditory recognition of known concepts or for creation of a new concept if an input word is not recognized by the AI.
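The two-loop arrangement can be sketched like this. The subroutine aud_mem() is a stand-in for the real AudMem() module, and storing into a flat @stored list is a simplification of the auditory @ear array.

```perl
use strict;
use warnings;

my @stored;
sub aud_mem { push @stored, shift }     # stand-in for AudMem()

my $msg = "boys play games";
for my $word (split /\s+/, $msg) {      # outer loop: one word at a time
    for my $char (split //, uc $word) { # inner loop: one character at a time
        aud_mem($char);
    }
    aud_mem(" ");                       # blank space marks the word boundary
}
print scalar(@stored), " engrams\n";    # prints "16 engrams"
```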

Now we need to straighten out the proper assignment of concept-tags in audition for both known and previously unknown concepts (words). When we run the AI, we see that it takes in one whole sentence but only the first word of each additional sentence. We suspect that the problem is that we have not zeroed out the $eot variable after the outer loop of AudInput(). We set $eot back to zero and see what happens. The AI starts taking in one whole sentence after another, with the result that we run out of auditory memory to hold all the sentences. We also noticed that the AI was not properly incrementing the $nxt next-new-word variable for each unknown word in a sentence.

First we need to look and see where the $nxt counter supposedly gets incremented. Hmm, the free AI source code uses $nxt++ only in the NewConcept() module. Why does it not happen for the next new concept after each previous new concept? We put a diagnostic message into the NewConcept() module, run the Perl killer app program, and the message does not even appear, so maybe NewConcept() is not being called.

AudInput() is supposed to call the NewConcept() module. Oh, but the code to call NewConcept() has been left out of the inner loop for single words, such as previously unknown words (new concepts), and so the code is not doing anything immediately after the outer AudInput loop. Let us put the NewConcept-calling code back into the inner loop. We do so, and now $nxt is being incremented too rapidly. Oh, we put the NewConcept-calling code too deep inside the inner loop. The calling code must be inside the outer loop, but also outside of the inner loop. Let us make the change and see what happens. Well, the AI is no longer incrementing $nxt too rapidly, but there is still a problem. The $nxt variable is being incremented only once for each new word, but the resulting concept-number is not being properly assigned at the end of each new word in auditory memory.

In the AudRecog() module, we should try to get away from using the $penult flag to detect an engram preceded by a blank space. Let us try using $prv to split up the @ear row and look at the preceding engram.

2016 February 03:

We have been working fruitlessly on the AudRecog() module and now we suspect that no activation is being imposed inside the flag-panel of the @ear auditory memory array. Therefore we shall take our most recently uploaded and save it as for further work.

After some experimentation we are learning how to get AudRecog() to impose activation on an engram in the @ear auditory array. We do not want these activations to persist over time, so now we create the AudDamp() module derived from the Mindforth AI.

2016 February 04:

In we continue to troubleshoot the AudRecog() module. Today we are trying to make sure that the module creates chains of non-initial activation for the achievement of conceptual recognition at the end of any unbroken chain. When we type in "boy sees all", we notice among the diagnostic messages that the AI is claiming to increase non-initial activation for the phoneme "O" at both t=4 amid "ERROR" and at t=14 amid "BOY". The "O" in "ERROR" should not be receiving any activation. When we investigate, we see that the diagnostic message is a misnomer, and we change it to a question asking if it is time to "INCREASE NON-INITIAL ACTIVATION?"

Then we have a problem because AudRecog() seems to recognize the input word "BOY" quite well at the t=19 time-spot where "BOY" comes to an end in auditory memory, but the AudMem() module continues to the t=20 time-spot and diagnostically claims to be storing concept number "589" (for "BOY") a second time. Oh, Mindforth stores the "audpsi" and then immediately zeroes out the "audpsi". Let us try the same thing in the Perl AI AudMem() module. It does not work; it somehow prevents the basic recognition of each word.

2016 February 05:

Today in we are concerned with setting the $audpsi tag after a word of input has come in. We have renamed the program as for several reasons. Although is indeed an AI Mind, we want there to be many Perlmind instantiations, of which is just an early one, struggling with all the other Perlminds for survival of the fittest. We also want the user interface to be a dialog between "Human" and "Ghost", not between the "Human" and "Robot" of the MindForth AI. A Perl AI running on a webserver is likely to resemble "the ghost in the machine" more than a robot.

2016 February 06 -- A Simple Bugfix

In the AI program today we attempt a minor bugfix -- a major event because up until now the primitive artificial intelligence in Perl was not robust enough to have any minor problems. The purported Perl killer app has only four words -- ERROR, A, ALL, BOY -- in its MindBoot sequence, because we want to see a non-scrolling, easily viewable screen of data when we run the AI to its Perl-die end for the sake of troubleshooting. Recently we improved the AudMem() module so that it stores engrams well in auditory memory, but we notice a bug in the process of auditory recall. When we type in, "A boy sees all," the Ghost program assigns the following concept-numbers: A [101] BOY [589] SEES [101] ALL [123]. It is incorrect for the AI to assign "101" to the unknown word "SEES". Let us try to understand and fix whatever went wrong.

As we inspect the diagnostic messages during the input, we observe that false values of "101" for both the "provisional recognition" $prc and $monopsi are being carried over, and $audrec for "auditory recognition" is holding the same "101" value, which belongs to "A" and not to the unknown input-word "SEES", which should be receiving a $nxt new-concept value of "900". These erroneous values were probably assigned during the earlier processing of the article "A" at the start of the input sentence. Certain flags were not being zeroed out when they should have been.

In the auditory recognition AudRecog() module, we see that $monopsi gets its value from $prc, so we should look into the provisional recognition $prc as potentially the source of the problem. In AudRecog(), $prc gets its value from the "$aud[2]" concept-number of any single-character word coming in, such as "A" in this case or "I" in other situations. The current AudRecog() code does not seem to zero out the $prc variable anywhere. Let us see if the ancestral Mindforth code zeroes out prc anywhere. Hmm, not even MindForth zeroes out prc. Let us see if the JavaScript AiMind.html zeroes out prc. Well, the JSAI zeroes out prc in both AudMem() and AudInput(), so the Ghost AI in Perl should get with the program. But we should not zero out $prc while program-flow is still in AudRecog(). No, the $prc variable is apparently meant to carry its value over into AudMem(), where it can at least provide a word-stem, especially for a highly inflected language like Russian. Since we are not yet using $prc in the Perl AudMem() module, let us zero out $prc in the AudInput() module. Now let us run the Ghost AI again and see what spooky things occur. We enter "A boy sees all" and we get the following correct concept numbers: A [101] BOY [589] SEES [900] ALL [123]. The unknown word "SEES" obtains the concept number "900" from the $nxt variable which is set to "900" at the end of the MindBoot() sequence and which will be incremented for each new word being treated as a new concept by the Ghost AI -- the ghost in the machine.

Unfortunately, there is still a bug. We type in a sentence and we get the following concept numbers:
I [900] SEE [901] A [902] BOY [589]. The "A [902]" assignment is not correct. Who're you gonna call? Let us call in some Perl AI ghostbusters.

Hmm, let us run the previous with the same input. We do, and we encounter the same glitch. After long troubleshooting, we discover that the $audrun variable needed to be reset to zero after each inner loop in the AudInput() module. Doing so fixed the problem of only getting one recognition of the "A" monopsi, but there is still a problem with $monopsi being set prematurely in AudRecog(), probably for lack of testing to see if the next space is a non-character.

2016 February 07:

Today in we may go back to searching backwards through auditory @ear memory in the auditory recognition AudRecog() module, because for lack of massive parallelism (MasPar) we will want to accept the first and most recent recognition of a word. We have looked back over the Perl Mind Programming Journal (PMPJ) to find that we were making the change to forward searches around 24 January 2016. From we took the code "for (my $i=$spt; $i>$midway; $i--) {" and we inserted it into the AudRecog() module, where we commented out the line of code for forward searches. Then we ran the Ghost AI program and it worked perfectly well, even with the search changing from forwards to backwards. Apparently, our use of the "split" function to look at previous and next rows of memory in the @ear array made it not matter whether we were searching forwards or backwards. However, now that we are searching backwards again, we may have to use the Perl word "last" to exit from the loop of the search when we have found the most recent recognition of an old concept.

The AI does not really find the most recent recognition of a known word until the search centers upon the very last character of the input word. If we have an eight-letter word coming in, the entire @ear array has to be searched for the first seven characters. We would only want to leave (as in Forth) or break (as in JavaScript) from the search-loop when the most recent result is all we need. Furthermore, as the AI ghosts get more sophisticated, we may want the found results to be as close in memory as possible to a conversation going on in real time. We might not merely be recognizing the word; we might also be reactivating the ideas using the word at the time of its retention in memory.
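The backwards search with an early exit can be sketched as follows. The @ear rows, the "word,psi" row format, and the $spt/$midway bounds are all assumed for illustration; only the loop header is taken from the journal.

```perl
use strict;
use warnings;

# A backwards search that stops at the most recent match, using last
# to exit the loop (as Forth uses LEAVE and JavaScript uses break).
my @ear = ("BOY,589", "SEES,900", "ALL,123", "BOY,589");
my $spt = $#ear;    # search starts at the most recent time-point
my $midway = -1;    # lower bound of the searchable memory
my $found = -1;
for (my $i = $spt; $i > $midway; $i--) {
    if ($ear[$i] =~ /^BOY,/) {
        $found = $i;
        last;       # keep only the most recent recognition
    }
}
print "found at t=$found\n";  # prints "found at t=3", not t=0
```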

Now in we will try to increase the MindBoot sequence.

2016 February 09:

Four days ago we finished debugging the AudRecog() module, and yesterday we completed the transfer of the English bootstrap from the JavaScript AiMind.html into the MindBoot() sequence of the Webserver Ghost AI. Now we notice that a new word does not show a recall-vector "$rv" tag in the Tutorial display, so we should troubleshoot the problem.

2016 February 10:

Today with we want to move beyond the completion of the MindBoot() sequence on a par with the JavaScript AiMind.html and we want to code the modules that comprehend human language input. When we insert a diagnostic message into the EnParser() module and run the Ghost AI, the message does not make any appearance. In the free AI source code, we see that EnParser() is called by NewConcept(), but it also needs to be called by OldConcept().

Because we have merged the lexical array of MindForth into the conceptual @psy array of the Perl AI, we may have to bypass at first and then remove the EnVocab() module from the AI Perlmind. The Perl AI will have to use the InStantiate() module to store the linguistic flags for English words.

The OldConcept() module now searches backwards in time to find the most recent engram of an $oldpsi concept. Then we use the Perl last function to exit from the search-loop in order to report the found data for the InStantiate() module.

2016 February 11:

In we would typically start coding by asking what is the most crying need, but nothing shouts out right now for debugging or coding. We perhaps ought to take care of the remaining associative tags in the @psy conceptual array, but we would like to code something more daring. Let us try to code some parts of the thinking machinery.

2016 February 12 -- Rudimentary NounPhrase() Code

As part of the thinking machinery in we will try now to fill in the NounPhrase() stub with some functioning code. In advance we know that we will have to add code to InStantiate() in order to impart functionality to the thought-generation modules.

We begin fleshing out NounPhrase() by having the module search for the most active concept in the @psy concept array. No such active concept is found, because first we have to establish code that lends initial activation to any concept talked about by a human user. Accordingly we assign a value to the $act variable in the EnParser() module and that same value gets stored with concepts in the InStantiate() module. In the NounPhrase() module we temporarily search for any concept with an activation higher than zero, and the module begins to report the finding of a $motjuste concept.

2016 February 14:

Today we are experimenting with UTF-8, because we want our Perl AI to think in Russian, along with English and German. We also want to make it easier for other Perl programmers to create AI Perlminds in other foreign languages beyond Russian, but using the Unicode in Perl that we use for Cyrillic characters in Russian.

With an otherwise abandoned version of the Perlmind, we rename the FileInput() module as RuFileInput() and we create an input.txt file with Russian characters in it. At first the AI goes into a seemingly infinite loop, and we have to press Control-C to stop it, which shuts down the MS-DOS window. We start the AI back up, but we insert some "die" breakpoints one after another in the new RuFileInput() module, so we can see what the AI does with Cyrillic input. It calls AudMem() and tries to store a character that is definitely not Cyrillic. We had better consult our various Perl books and webpage print-outs to learn how to "encode" the incoming Cyrillic characters from the input.txt file.

The program complained about an "undefined" subroutine &Main::encode until we put "use Encode" in the heading.

We started using open (my $fh, "<:utf8", "input.txt") on the Cyrillic input.txt file, and now AudMem() was trying to store one recognizable character followed by something unrecognizable.
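A round-trip sketch of the encoding layer follows. It writes Cyrillic with an explicit UTF-8 layer and reads it back the same way, so each character arrives intact; the filename is a hypothetical scratch file, and we use the stricter :encoding(UTF-8) layer rather than the looser :utf8 that the journal tried.

```perl
use strict;
use warnings;
use utf8;                           # this source file itself contains Cyrillic
binmode STDOUT, ':encoding(UTF-8)';

my $file = "input_test.txt";        # hypothetical scratch file
open my $out, ">:encoding(UTF-8)", $file or die $!;
print {$out} "да\n";                # write two Cyrillic characters
close $out;

open my $fh, "<:encoding(UTF-8)", $file or die $!;
my $line = <$fh>;                   # read them back through the same layer
close $fh;
unlink $file;
chomp $line;
print length($line), "\n";          # prints "2": 2 characters, not 4 bytes
```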

2016 February 15:

Yesterday we had a version of the Perlmind open a Cyrillic input.txt file and store each character through AudMem() in the @ear auditory memory, but the nonsense Russian characters being stored were not the same characters as we had Alt-Shift typed into our Russian input.txt file. Today we plan to try a few more diagnostic efforts, such as changing the Cyrillic input.txt to contain only one single Russian letter, typed twenty or thirty times. As we cycle through a few letters of the Russian alphabet, we may get a clue as to what the RuFileInput() module is doing with the supposedly Cyrillic input.

We also plan to store the Cyrillic input.txt file somewhere else and delete it from the Perl directory, so that the RuFileInput() module will first try to open the text file and then call RuAudInput() instead, so that we can work on entering Cyrillic directly through the keyboard. We know that in early versions of the Perlmind we were able to type in Russian and see it stored as Russian in the auditory memory. Our real aim is to get the Russian portion of the MindBoot() sequence working properly so that the AI can recognize Russian words and think in Russian.

When we make the Cyrillic input.txt file consist of only one particular Russian letter used several times, our AI stores only one character but the wrong one, and in concept-groupings consistent with the length of the quasi-words in the input file. Somehow the RuFileInput() module is not reading in the Cyrillic characters as we intended them to be read in.

We make some progress when we delete the Cyrillic input.txt file so as to make RuFileInput() give up and switch to RuAudInput(). There at first we use $msg = decode_utf8() and we do not get the correct Russian letters during AudMem() processing. When we switch to $msg = encode_utf8() suddenly the Russian characters show up loud and clear, although they are apparently not becoming $pho and they are not being stored in the @ear auditory memory.

By a fluke we have discovered that one experimental program shows us a code for any Russian letter entered by us, and our more advanced program, when we store the indicated code in the MindBoot(), renders the Russian letters without ancillary symbols as the "Ghost: " output.

2016 February 16 -- Using Perl Unicode to Display Russian

Yesterday we finally made the Perl AI program display Russian Cyrillic characters without a symbol to the left of them, but we do not yet understand how the Unicode UTF-8 works to display Russian characters. Today we hope to isolate and grok the code that works.

From the experimental version we have removed snippets of code and then tested to make sure that the AI still shows Russian letters without extraneous symbols. Eventually we get down to the EnThink() module, which we notice is calling the Speech() module twice in order to simulate thinking. During the first call, Speech() says "ERROR" in English, because no time point has been given for the start of speech and the word "ERROR" is found at the default beginning point. During subsequent calls from EnThink() to Speech(), the time-point for some Russian test vocabulary is given, and so we see some Russian words. But the Perl program seems to need at least one encoding in the format of \N{U+0} to prevent the extra symbols from being displayed with the Cyrillic characters, so we use one such encoding in advance of our intended first Russian word.
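The \N{U+...} notation can be shown in isolation. Each escape names a Unicode code-point, so Cyrillic can be embedded in the source without literal bytes; the word chosen here is merely an example.

```perl
use strict;
use warnings;
binmode STDOUT, ':encoding(UTF-8)';

# \N{U+0414} is Cyrillic capital De; \N{U+0410} is Cyrillic capital A.
my $word = "\N{U+0414}\N{U+0410}";   # Russian "ДА" ("yes")
print "$word\n";                     # prints the Cyrillic word
print length($word), "\n";           # prints "2": two code-points
```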

2016 February 17 -- Apportioning Time for English, Russian and German in MindBoot

The Perl AI currently uses 651 time-points for the English boot sequence. The Russian Dushka AI uses 577 time-points. The German Wotan AI uses 1215 time-points. Perhaps in Perl we should encode the Russian bootstrap starting at t=1001 so as to leave space for expansion of the English bootstrap and so that the German bootstrap may start around t=2001 and go beyond t=3000.

2016 February 18:

Today we need to see first if AudMem() in the Perl AI properly stores Russian Cyrillic characters, and secondly if the as-is AudRecog() module can recognize Russian words, or if AudRecog() needs modification to handle Russian Cyrillic.

Oh gee, currently AudMem() does a lot of filtering to make sure that an auditory engram is in the [A-Z] range.

2016 February 19:

When we comment out the line # binmode STDIN, ":encoding(UTF-8)"; near the start of the AI Perlmind, we stop getting the message utf8 "\x9F" does not map to Unicode at line 389 upon Alt-Shift keyboard input of the capital letter "A" in Russian. Instead we get the Russian letter with a subscript "T" symbol to the left of the Cyrillic "A".

We have already been using "reverse" on the input in the AudInput() module, in order to "chop" off each character of input from the first to the last for transfer to the AudMem() module. Now that Russian characters are coming in with a four-character coding like "\x9F" for capital Russian "YA", we try using the substr function to glom onto the four characters designating the one Russian character. However, our diagnostic messages inserted into the AudInput() module reveal that even the hex code has undergone the "reverse" function and "\x9F" has turned into "F9x\" as a value of the $pho(neme) variable. So we may have to "reverse" the $pho variable.
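The reverse mishap is easy to reproduce in isolation. In scalar context, reverse flips a string character by character, so the escape "\x9F", stored as four literal characters, comes out as "F9x\" -- and reversing again restores it.

```perl
use strict;
use warnings;

my $pho = reverse '\x9F';   # single quotes: four literal characters \ x 9 F
print "$pho\n";             # prints "F9x\"
my $back = reverse $pho;    # reversing a second time restores the original
print "$back\n";            # prints "\x9F"
```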

2016 February 20:

In our Russian AI progress, we have gotten Russian words stored in the Perl MindBoot() to appear on screen as Cyrillic characters, and we have gotten Russian from the Windows XP keyboard to go through AudMem() into auditory memory where it gets displayed in Cyrillic characters during a diagnostic read-out. Next we need to see if the Perl AudRecog() can recognize Russian words just like the Dushka JavaScript AudRecog().

We may try now to see if the AudInput() module can immediately detect Russian input and set the $hlc human language code to "ru" so that both AudInput() and AudMem() may treat English and Russian input differently.

Our Mentifex LinkedIn profile has been updated today to convey central facts about the Perl "Ghost" AI project.

Today we created thirty-three tests for Russian capital letters to set the human-language-code $hlc to "ru" for Russian. Then we got a brighter idea and we coded thirty-three new tests to not only convert lowercase to uppercase but also to set the human-language-code $hlc to "ru" for Russian.
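One regex range over the Cyrillic block can stand in for thirty-three per-letter tests. This sketch is not the journal's actual code: the input word and the "en" default for $hlc are assumptions for illustration.

```perl
use strict;
use warnings;
use utf8;
binmode STDOUT, ':encoding(UTF-8)';

# Detect any Cyrillic letter, uppercase it, and set the $hlc
# human-language code to "ru" in one pass.
my $hlc = "en";
my @out;
for my $char (split //, "боЙ") {
    if ($char =~ /[\x{0400}-\x{04FF}]/) {  # the Unicode Cyrillic block
        $hlc = "ru";
        $char = uc $char;  # uc applies Unicode casing to wide characters
    }
    push @out, $char;
}
print join("", @out), " hlc=$hlc\n";       # prints "БОЙ hlc=ru"
```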

Since we see diagnostically that our AudMem() module is now sending Russian characters into AudRecog() but not getting any $audpsi recognitions, we may need to create a RuAudRecog() version of AudRecog() and use it to recognize Russian words when the human-language-code $hlc is set to "ru" for Russian.

2016 February 21:

Today in, perl objects to our use of if ($prv[0] =~ /["\x**"]/) in the RuAudRecog() module, stating that the illegal hexadecimal digit "*" is being ignored by perl, when we try to use "*" as a wildcard. At first we go along with the mandates of perl, but we discover that RuAudRecog() stops recognizing a Russian word.

2016 March 03:

Now that with the basic MindBoot() sequence has been filled in for both English and Russian, it is time to code the actual Strong AI mechanisms taken from MindForth and the JavaScript AiMind.html program. We may hold off on coding the German portion of the AI for a while, because the Russian artificial intelligence is rather difficult to program.

2016 March 04:

Our main question in is how early the RuAudRecog() module will declare a provisional recognition ($prc), which we need to recognize the stem of a Russian verb. In the Dushka Russian AI, "prc" is potentially set in two places, first in the early code for really short words of one or two characters, and second in the general code for all word-recognitions.

We may be able to do our first significant AI coding after finishing the MindBoot() by making the Ghost AI able to recognize a Russian or English verb based on its stem. To do so, we have to make sure that the $prc variable for "provisional recognition" is working, and we have to tweak the memory-insertion apparatus to make sure that not only the final character of a stored word carries an $audpsi tag, but also the preceding characters which include the end of the word-stem.

Hmm, MindForth does not seem to have the audpsi grouping mechanism. Let us see if the German Wotan AI has it. Oh, the Wotan AudMem module does indeed have the prc-audpsi mechanism. And the German verb "verstehen" ("understand") has three audpsi numbers in a row in the Wotan bootstrap sequence.

2016 March 05:

Yesterday we made the AI able to recognize the stems of bootstrap Russian verbs by means of the $prc "provisional recognition" variable. We had to enhance the verbs in the bootstrap by making sure that there was a concept-number not only at the end of the word, but also next to each auditory engram from the final character of the verb-stem up through the end of the word. Today we need to make sure that new Russian verbs get stored by the Perl AI in the same format -- with concept-numbers stored from the end of the stem to the end of the verb. Let us get to work and then come back and report our success or failure.

Although the Dushka Russian AI in JavaScript uses the AudInput() module to attach concept-numbers to Russian verbs from the end of the stem to the end of the word, in the Perl AI we should try to achieve the same purpose in the RuAudMem() module. There we insert the necessary code and we begin to see both old and new Russian verbs recognized in their various forms other than the infinitive.

Now we will also stub in a Motorium() module and a Volition() module to make it clear that the Ghost Perl Webserver Strong AI will be able to set free-will goals for itself and operate a robot body to achieve those goals. We also had to stub in a module for physiological Emotion() to influence AI thinking.

2016 March 10:

For thinking in English with the AI, let us see if we can concatenate the $output string. But first we should change the Speech() module from its original form as of May of 2015. We should move through auditory @ear memory and obtain each $pho from the initial position in each auditory engram.

2016 March 12:

Today in we are implementing the setting of the $pre tag in the InStantiate() module. If a noun or pronoun comes in with a pos=5 or pos=7, then $prevtag is set to the $psi value for the next go-around, assuming that a verb will come in and acquire the $psi value as its $pre associand.

2016 March 13:

Yesterday we worked on setting the $pre flag when the InStantiate() module relates an incoming word to the previous concept associated with it, such as a noun or pronoun as the subject of a verb, or a verb as a concept in the $pre relationship to a direct object. Today in we would like to set the $seq tag for what verb follows a subject, or for what direct object follows a verb.
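The $pre/$seq pairing can be sketched with hashes standing in for the @psy flag-panel. The psi numbers, the pos values (5 = noun, 8 = verb), and the %pre/%seq hashes are all illustrative assumptions, not the journal's InStantiate() code.

```perl
use strict;
use warnings;

my (%pre, %seq);
my $prevpsi = 0;   # the $prevtag idea: remember the last tagged concept
for my $item ([ 701, 5 ], [ 820, 8 ]) {   # BOY(701) then PLAYS(820), assumed
    my ($psi, $pos) = @$item;
    if ($prevpsi) {
        $pre{$psi}     = $prevpsi;  # the verb points back to its subject
        $seq{$prevpsi} = $psi;      # the subject points forward to its verb
    }
    $prevpsi = $psi if $pos == 5 || $pos == 8;
}
print "pre=$pre{820} seq=$seq{701}\n";  # prints "pre=701 seq=820"
```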

2016 March 18:

In the parser module, we use $tsj (time-of-subject) to go directly to the subject noun or pronoun and to "perl-split" it in order to have access to the entire @psy $tsj flag-panel, with such values as $pos and $dba etc.

In the special case of one or more consecutive prep-phrases (prepositional phrases), we need some sort of mechanism to prevent the parser from seizing the prep-phrase noun and declaring it as the $pre (or subject) of the verb. We could increment a variable to count the number of consecutive prep-phrases, and use the counted value to prevent the parser from declaring a subject $pre -- oh, wait. It might be better to have the parser look for the first occurring noun as the subject-$pre but to not designate as subject any noun preceded by a preposition. We could have code such as

if ($pos == 6) { # 2016mar18: after a preposition...
# 2016mar18: put the next noun on hold.
} # 2016mar18: end of test for a pos=6 preposition

The parsing mechanism for finding a noun as the $pre of a verb will no longer have to search backwards in time for a noun if the presumed subject-noun has already been captured as a $tsj (time-of-subject) value. We just need to prevent the designating of a $tsj for any noun that is only the object of a preposition and not the subject of a verb.

We could have code that would seize a $tprep (time of preposition) and use it upon encountering a noun to "split" the $tprep and insert the noun as a $seq, while inserting the preposition as a $pre for the noun. Or should we use the times rather than the concept-numbers?

If the parser holds onto the $tsj time-of-subject as the future recipient of a $seq-verb and for the future insertion as a $pre-subject for the verb, then really no prepositional phrases, however numerous and however concatenated, will need to employ a "skip" mechanism. The very use of specific time-points for designating the subject-noun and the verb and the indirect or direct object will automatically skip over any intervening verbiage, such as prepositional phrases and adjectives and adverbs.
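The hold-the-next-noun idea can be sketched over a toy input. The word list, the pos values (5 = noun, 6 = preposition, 8 = verb), and the $hold flag are illustrative assumptions; only the $tsj concept comes from the journal.

```perl
use strict;
use warnings;

# "In chess a boy plays games" reduced to a few (word, pos) pairs.
my @input = ([ "IN", 6 ], [ "CHESS", 5 ], [ "BOY", 5 ], [ "PLAYS", 8 ]);
my ($tsj, $hold) = (-1, 0);
for my $t (0 .. $#input) {
    my ($word, $pos) = @{ $input[$t] };
    if ($pos == 6) { $hold = 1; next }   # after a preposition...
    if ($pos == 5) {
        if ($hold) { $hold = 0; next }   # ...put the next noun on hold
        $tsj = $t if $tsj < 0;           # first free noun is the subject
    }
}
print "tsj=$tsj\n";  # prints "tsj=2": BOY at t=2, not CHESS at t=1
```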

2016 March 19:

In we made the new parser module able to deal with a prep-phrase (prepositional phrase) at the start of a sentence such as, "In chess a boy plays games." Now in we address prep-phrases intervening between subject and verb as in, "A boy in chess plays games."

2016 March 21: -- Detecting indirect and direct objects

With the new parser module we would like now to handle input with both indirect and direct objects, so that the Strong AI will assign the correct associative tags. However, the @psy conceptual array does not contain an indirect-object tag in its flag-panel for each row in the array. The $seq flag on a verb points to the direct object, but nothing points to an indirect object. Perhaps no such flag is actually necessary, but we should keep in mind the idea of adding it. We could perhaps use the pre-existing $jux flag, but a conflict could arise between the adverb "not" and an indirect object, as in a sentence like, "I do not give you anything." Maybe we could treat the indirect object as a $seq in relation to the direct object, as if "I give you something" means "I give something to you."

For the actual mechanics of detecting indirect objects, we could take the rather obtuse method of letting the first noun after a verb go by default into the classification of an indirect object, to be repudiated if no further noun comes in. Such an approach would work especially well with an input like, "I will show you." However, we could also treat the "you" in "I will show you" as a genuine direct object in English, if we expand the cognitive meaning of "show" to include the idea of "make an impression upon." Of course, in German or Russian the idea of "show" would still use the dative case for "you".

We certainly do not need to create a class of verbs that pertain specifically to indirect objects, because almost any verb can take on that role, as in for example, "I will build you a house."

No, the best approach might be to let an incoming second noun acquire the status of a direct object while "demoting" the first noun to the status of an indirect object.

Suppose we set $tio and $tdo simultaneously when a verb is followed by a first noun. If a second noun comes in, we can test $tio for being non-zero or positive, and use the positive test to change the $tdo to the newer noun. The $tio can remain as the time of the indirect object.
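A minimal sketch of that demotion logic (the helper subroutine and the time values are invented for illustration; only $tio and $tdo come from the journal):

```perl
use strict;
use warnings;

# Sketch of the indirect-object demotion described above.
# $tio = time of indirect object; $tdo = time of direct object.
my ($tio, $tdo) = (0, 0);

sub note_noun_after_verb {   # hypothetical helper, not in the Ghost code
  my ($t) = @_;              # $t = time-point of the incoming noun
  if ($tio == 0) {
    ($tio, $tdo) = ($t, $t); # first noun: provisionally both roles
  } else {
    $tdo = $t;               # second noun: becomes the direct object,
  }                          # demoting the first noun to indirect object
}

# "I give you something": noun "you" at t=3, noun "something" at t=4
note_noun_after_verb(3);
note_noun_after_verb(4);
print "tio=$tio tdo=$tdo\n";   # tio=3 tdo=4
```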

2016 March 26:

Today in we are thinking of introducing $iob (indirect object) as a new associative tag in the flag-panel for each row in the @psy conceptual array. We are concerned or worried that we need such an indirect-object tag for verbs if the Perl Strong AI is going to have the ability to re-generate a sentence stored in conceptual and auditory memory.

First we declare the $iob variable and we run the AI to make sure that Perl does not object to the name of the variable.

Then in TabulaRasa() we add one more zero to the @psy array. The AI still runs.

Then in KbLoad() we include $iob as the seventh seal. Ghost still runs, just like a Swedish movie.

2016 March 27:

In the previous AI Minds, there were only two "pov" points of view: internal and external. Now in the Perl AI we will try to set up three points of view: 1) self; 2) dual; 3) alien. The self=1 pov is for the ego thinking as "I". The dual=2 pov is for another mind talking as "I" into the "you" of the AI. The alien=3 pov is for when the AI is using FileInput() to read a text that contains pronouns like "I" and "you" without letting them be re-interpreted as part of the "dual" situation.

Where in the AI program should the incoming word "707=YOU" be changed into the "701=I" concept? It should be somewhere between and including the AudInput and InStantiate modules. Oh, during AudInput we should set the $pov flag to the value of "2" for "dual" mode. Likewise, during thinking we should set the flag to "1" for "self" mode.

When we try to set up two conditionals to make the switch in the InStantiate module, the second conditional simply reverses the switch made in the first conditional. Maybe we should try using the OldConcept module.

In the OldConcept module, since $oldpsi turns into $psi, we base our conditional on the unchanging $oldpsi and so the second conditional does not reverse the first conditional.

  if ($pov == 2) { # 2016mar27: during a pov "dual" conversation...
    if ($oldpsi == 707) { $psi = 701 } # 2016mar27: interpret "YOU" as "I";
    if ($oldpsi == 701) { $psi = 707 } # 2016mar27: interpret "I" as "YOU".
  } # 2016mar27: end of test for other person communicating with the AI.

Then we have another problem because incoming 707=YOU is stored as 701=I but the auditory recall-vector still goes to the "YOU" engram in auditory memory. We have had this problem in the past with MindForth or with one of the other AI Minds.

2016 March 29:

Today in we are trying to get the AI to respond to user input with something resembling a grammatical English sentence. So first we must see if EnVerbPhrase() calls a direct object. Yes, it does, but first we must establish the system of using $subjectflag and $dirobj as flags during thinking. Still the AI does not output a direct object, so we need to check and see if OldConcept() is imparting activation to concepts mentioned during user input.

Although MindForth sets activation during EnParser, we might as well set the activation more forthrightly during InStantiate().

2016 April 01:

Today in ghost121.html we need to improve the mechanisms for thinking in Russian so that users may see how Russian personal pronouns are properly interpreted as referring either to the self of the AI or to the human user interacting with the AI.

2016 April 03:

In we want RuVerbPhrase() to call VerbGen() for a missing verb-form. Inside VerbGen(), we need a new way to terminate the DO-WHILE loop, because in the Perl AI we are no longer using a continuation variable to tell us when a stored word is at an end. So, we run the loop "while" the $abc (AudBuffer transfer character) is not equal to a blank space, that is, as long as the characters in the word continue, before the end of the stored word.
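Sketched with a toy character array (the real module reads auditory memory; @ear and the offsets here are stand-ins), the loop terminates on the blank space that ends the stored word:

```perl
use strict;
use warnings;
use utf8;
binmode STDOUT, ':encoding(UTF-8)';

# Illustrative only: run the transfer loop "while" the transfer
# character $abc is not a blank space, i.e. while the stored word
# continues. A blank space marks the end of the engram.
my @ear = ('З', 'Н', 'А', 'Ю', ' ');   # stored word plus terminal blank
my ($i, $abc, $stem) = (0, '', '');
do {
  $abc = $ear[$i];
  $stem .= $abc unless $abc eq ' ';   # collect the verb-stem characters
  $i++;
} while ($abc ne ' ');                # stop at the end of the word
print "$stem\n";   # ЗНАЮ
```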

2016 April 04:

When we type in "You bug me" in Russian with, the AI with VerbGen() is not properly changing the verb-form in response, apparently because the RuAudMem() module is not attaching an audpsi tag far back enough along the verb to cover the last character of the verb-stem.

2016 April 05:

In RuAudRecog there should perhaps be two initial searches, one for a matching character as the start of a stored word, and subsequent other searches for a matching activated character. Actually, there should be only one search per incoming character, but with varying conditional tests. For the first incoming character under the audrun=1 condition, there should be a check for something like a blank space before the stored engram at the start of a stored word. If there is a preceding blank space, a throw-away $monopsi may be declared.

When we input "ЗНАЮ", the initial "З" gets activated not only on words with the same verb-stem, but also on the preposition "ЗА". In "ЗА" the N-I-L (next-in-line) "А" gets activated, but will not match up with the "Н" coming in as part of "ЗНАЮ". When we are trying to match the "Н" with an activated "Н", perhaps we should de-activate any other activated character that is not the "Н", so that the "А" in "ЗА" gets de-activated. (Of course, we could encounter problems when a character occurs twice in a row, but let us deal with that problem later.)
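The de-activation idea might be sketched like this (the hash layout is an invented stand-in for the auditory engram array):

```perl
use strict;
use warnings;
use utf8;

# Illustrative only: both words starting with "З" were activated by the
# first character of incoming "ЗНАЮ"; when "Н" comes in, any activated
# engram whose next-in-line character is not "Н" loses its activation.
my %act  = ( 'ЗА' => 8,   'ЗНАЮ' => 8 );    # activation per stored word
my %next = ( 'ЗА' => 'А', 'ЗНАЮ' => 'Н' );  # next-in-line characters

my $incoming = 'Н';
for my $word (keys %act) {
  $act{$word} = 0 if $next{$word} ne $incoming;   # de-activate mismatches
}
print "$act{'ЗА'} $act{'ЗНАЮ'}\n";   # 0 8
```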

2016 April 06:

Yesterday we finally got the Perl AI to respond properly to a Russian verb flanked by personal pronouns as subject and object, but the solution worked well only upon the very first input. Afterwards the responses were garbled, as if we had not zeroed out the OutBuffer() quasi-array of sixteen variables holding the Russian-verb stem in a right-justified detente suitable for the changing of inflectional endings. Now in we should try to clear up the problems.

2016 April 07:

Today let us work on AI and let us work on fixing any one of the first bugs to pop up. So we start the AI and we type in the Russian sentence "Я ЗНАЮ ТЕБЯ" ("I know you"). When we press the [Enter] key -- doing it now -- we get "ТЫ ЗНАЕШЬ МЕНЯ" ("You know me") as the grammatically correct output of the AI, but there are two glitches or bugs visible in the display of conceptual memory for the human input. There is no recall-vector "$rv" being displayed for the Russian pronoun "Я" ("I") at the start of the human input, and also none for the pronoun "ТЕБЯ" ("you") at the end of the input. This problem is a relatively new bug, because the recall-vectors were being displayed quite accurately up until a few days ago, when we made major changes to the RuAudRecog() Russian auditory recognition module. So now let us pay attention to the current diagnostic messages and let us insert additional diagnostics if we need them.

As we inspect the diagnostics, we suddenly remember: the absence of the recall-vectors is not a bug; it is a feature! When the Russian word for "I" comes in, the software converts it to the internal "you" concept and stores a recall-vector of zero ("0") so that "you" will not fetch the stored word for "I". Case closed. Let us run the AI again in search of a real bug.

Since we can find no bugs to work on, let us move on in adding to the functionality of the Perl Strong AI. Let us code the tendency of the AI Mind to activate the self-concept of "I" if no other concept is currently active.

2016 April 08:

Today in we would like to work on re-entry. The Forth AI Minds accomplish re-entry by setting the pov flag to "internal" in Speech and by calling the AudInput module from Speech to send the output of the Mind back into the Mind. In the Perl Ghost AI, we must stop setting the $pov flag to "external" during the AudInput() module, because AudInput may receive input either externally from a human user or internally from the Ghost AI itself during the process of re-entry.

Uh-oh. Houston, we have a problem. The AudInput() module in Perl expects to be receiving input as "STDIN", but the re-entry process does not route itself through "STDIN". Who're you gonna call, Ghost? We may need to set up an actual ReEntry() module. So we set up a ReEntry() module but we still have a problem. The Ghost AI is not interpreting external "YOU" as internal "I". We may need to make sure that the point-of-view $pov flag is set to 2=external by the Sensorium() module. When we do so, the problem goes away for the interpretation of personal pronouns.

2016 April 09:

We want to flesh out the ReEntry() module in today. As we examine the JavaScript AI source code of the English AiMind.html and the Russian Dushka.html, it dawns on us that we can perhaps not avoid using the AudInput() module for both human input and cognitive re-entry. Let us try conditionalizing the use of "STDIN" during AudInput().

We forgot that the $msg string during AudInput() is the whole sentence of input, not each character one-by-one. Accordingly it did not work to send each $k[0] character individually from Speech() back into AudInput().

We will try now to use $idea to hold the AI output for re-entry into the AI Mind. However, let us try calling ReEntry() not from the Speech() module but from somewhere in the EnThink() module, because we want the Speech() module to have finished expressing the $idea.
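One way the conditionalized input source might look (a sketch only; the real AudInput() handles characters and engrams, and the $pov values follow the journal's usage of 1 for internal and 2 for external):

```perl
use strict;
use warnings;

# Illustrative only: read from STDIN solely for external human input;
# during re-entry, consume the $idea string instead.
my $pov  = 1;             # 1 = internal re-entry; 2 = external input
my $idea = 'I KNOW YOU';  # output of the Mind awaiting re-entry

sub AudInput {
  my $msg;
  if ($pov == 2) {
    $msg = <STDIN>;       # external: wait on the human user
    chomp $msg;
  } else {
    $msg = $idea;         # internal: re-enter the AI's own thought
  }
  return $msg;
}
print AudInput(), "\n";   # I KNOW YOU
```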

When we use ReEntry() to send the $idea back into AudInput(), the result seems to be an infinite loop. Let us see if the loop will run its course.

2016 April 10:

In we will first try to get the AI to use parameters in finding the correct form of a be-verb.

Now in we would like to see if we can bring the AI closer to thinking out loud on the computer screen. However, we need to prevent the AI from going immediately into AudInput() mode. Let us try to impose an initial time period while the AI does some thinking first, and then waits for human input during AudInput().

Perhaps we can get the main large AI program to write Perl code to a file and then run that file as an input program, so that the main AI can think merrily along without waiting for human input.

We have been able to get one Perl program to write and run a new Perl program, but the input gets interpreted as an attempt to issue DOS commands. Perhaps we could have the main AI program create and run a smaller program that receives input and writes it to an "input.txt" file with a special alternating identifier, so that the main AI program will be able to keep on thinking while merely checking the input.txt file to see if the special identifier has toggled to its alternate form.
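The toggling-identifier scheme could be roughed out as follows (the file name, marker format and helper names are all assumptions for illustration):

```perl
use strict;
use warnings;

# Illustrative only: a side program records user input plus an
# alternating marker in input.txt; the main AI polls the file and
# accepts new input only when the marker has flipped.
my $file = 'input.txt';

sub write_input {             # side program: record input + marker
  my ($text, $marker) = @_;
  open my $fh, '>', $file or die $!;
  print $fh "$marker\n$text\n";
  close $fh;
}

sub poll_input {              # main AI: return text only on a new marker
  my ($last_marker) = @_;
  open my $fh, '<', $file or die $!;
  chomp(my $marker = <$fh>);
  chomp(my $text   = <$fh>);
  close $fh;
  return ($marker ne $last_marker) ? ($marker, $text)
                                   : ($last_marker, undef);
}

write_input("you see me", "A");
my ($m, $t) = poll_input("B");   # marker changed from B to A: new input
print defined $t ? "$t\n" : "no new input\n";   # you see me
```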

2016 April 11:

By telling the MainLoop in not to call Sensorium() until time $t is greater than 2500, we have made the Strong AI try to think on its own for a while before accepting user input. Then in our memory-display we see that the AI has spoken the word "I" but has only output "ERROR" instead of a verb and instead of a direct object or a predicate nominative. So we may have to start coding a SpreadAct() module to get the activation to spread from the pronoun 701=I to an associated verb.

Now that we have stubbed in the SpreadAct() module, we need to declare variables like $seqpsi to do the work of spreading activation.

2016 April 12:

As we develop the SpreadAct() module in, we should perhaps call the module only from the end of a sentence of thought, as for example, from the direct object of a verb. If the AI thinks a thought in response to user input and the idea of the AI goes by ReEntry() into the AI memory, and the human user then only presses [Enter], SpreadAct() could give rise to another idea as thought by the AI.

2016 April 13:

We would like to start using $verblock and $nounlock to preserve the logical integrity of an idea, but first we must implement the $tkb flag. Although in the old MindForth and the old Wotan we needed to conduct a search to find the tkv (now tkb) value in the InStantiate module, in the new Parser() module of the Ghost Perl AI we simply use the tvb time-of-verb value as the tkb value. (If necessary, we could expand tkb to tkbv and tkbn for both verbs and direct-object nouns.) Then we also use the $tdo time-of-direct-object in the Parser() module to set the direct-object $tkb within the verb flag-panel. With the $tkb set for a subject-noun to find its verb and for a verb to find its direct object, we should be able to work with the verblock and nounlock flags.
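A simplified sketch of that $tkb chaining (the flag-panel layout here is an invented stand-in for the real @psy array; the time-points are illustrative):

```perl
use strict;
use warnings;

# Illustrative only: the subject row's $tkb points at the time-of-verb,
# and the verb row's $tkb points at the time-of-direct-object, so
# retrieval can hop subject -> verb -> object without searching.
my %psy;                                   # time => { psi, tkb }
my ($tsj, $tvb, $tdo) = (10, 11, 12);      # illustrative time-points
$psy{$tsj} = { psi => 701, tkb => $tvb };  # subject locks onto its verb
$psy{$tvb} = { psi => 810, tkb => $tdo };  # verb locks onto its object
$psy{$tdo} = { psi => 707, tkb => 0    };  # object ends the chain

my $verblock = $psy{$tsj}{tkb};            # time of the subject's verb
my $nounlock = $psy{$verblock}{tkb};       # time of the verb's object
print "verblock=$verblock nounlock=$nounlock\n";   # verblock=11 nounlock=12
```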

2016 April 14:

Earlier today we increased the diagnostic memory display to show both user input and AI output. Now in we have a problem where we enter "You are Andru" but the Ghost AI responds, "I ARE ANDRU." Obviously the EnVerbPhrase() module is not using parameters to fetch the proper form of the verb.

2016 April 15:

As of, the ReEntry() process may not be calling the SpreadAct() module, because the reentrant $idea is not going through the NLP generation process. However, the parts of speech in the $idea are transiting through the OldConcept() module and are indeed being classified as subject and verb and direct object. Therefore, at some point along the way of re-entry, it should be possible to capture and identify the direct-object concept in order to call SpreadAct() and thus keep a chain of thought going.

If we use SpreadAct() to keep a chain of thought going, it may be possible to prevent invoking AudInput() while the chain of thought is proceeding. We might have to inaugurate a $chaincon flag to keep from calling AudInput() until the chain of thought is exhausted and $chaincon goes from one down to zero.

Once the chain-of-thought begins, it seems difficult to stop it. We should perhaps let $chaincon increment itself and serve as a counter, so that we can stop the chain of thought when $chaincon reaches some arbitrary but low value.
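The counter version of $chaincon might be sketched like this (the cutoff value and the stand-in thinking subroutine are illustrative assumptions):

```perl
use strict;
use warnings;

# Illustrative only: let $chaincon increment with each link in the
# chain of thought, and accept human input once it passes a limit.
my $chaincon = 0;
my $limit    = 3;    # arbitrary but low cutoff, per the journal

sub think_one_idea { $chaincon++ }   # stand-in for a generated thought

think_one_idea() while $chaincon < $limit;   # chain of thought runs
my $call_audinput = ($chaincon >= $limit);   # now accept human input
print $call_audinput ? "AudInput\n" : "keep thinking\n";   # AudInput
```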

2016 April 16:

Yesterday we created the $chaincon flag variable and we posted about it in the comp.lang.forth and newsgroups on Usenet. Although the "chain-of-thought condition" flag was working well to delay the calling of AudInput() while the Ghost AI did some thinking, the actual thinking was of low quality, because the conceptual activation-levels are out of whack. Today in we would like to improve the activation-levels and thus improve the thinking.

2016 April 17:

Our selection of what to code today is based on the principle of most pressing or obvious need. We have recently gotten the SpreadAct() module to take the direct object from the end of a sentence and to try to retrieve knowledge about that former direct object from the knowledge base (KB). However, we saw that the AI was tending to repeat the next idea over and over. Therefore we need to refine and adjust the operation of the SpreadAct mechanism. We mainly need to make sure that neural inhibition will prevent the repetition of the same idea over and over. We also need to embed safeguards so that, if the former direct object does not lead to its role as the subject of a stored idea, some other notion may come to mind in the AI.

In we inserted some neural inhibition code into the EnNounPhrase() module and suddenly the AI was able to follow a chain of thought by converting direct objects into subjects.

2016 April 19:

Although yesterday we implemented the ReJuvenate() module, it did not seem to work perfectly. We may have to remove the legacy $edge flag and use some other way of not moving just part of an idea backwards in memory. We could perhaps set up a buffer area beyond the $vault line and move things willy-nilly into the vicinity of the buffer-line, but subsequent rejuvenations might lose track of the buffer area.

We could try to make sure of moving only whole words and not parts of words in auditory memory, and then we could count on other exclusionary conditions in the AI code to not try to retrieve a partially stored idea.

2016 April 21:

Yesterday we ran the Ghost AI without input to see if any glitches occurred, and we solved two problems that appeared. Later, however, we discovered that the AI could not easily be dislodged from its own internal chain of thought, so we must adjust the various activation-levels.

2016 April 22:

We have a problem because AudRecog() is recognizing the "we" in the word "weird" and assigning a provisional recognition $prc tag for "we" instead of letting "weird" be treated as a previously unknown word in NewConcept(). A similar glitch is occurring for comparable Russian words in RuAudMem(). We need to figure out a way to cancel the $prc tag if the incoming word goes on at length and is not recognized as the longer word that it is. Now we have apparently solved the problem by adding some AudRecog() code that lets a $prc tag be assigned only when the comparand engram character has some activation on it. There may briefly be a $prc on the "we" in "weird" but not as additional characters come in, even if finally the whole word fails to be recognized as a known "old" concept in OldConcept().
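The gist of that fix can be sketched as follows (the data layout is an invented stand-in for the auditory engram array; only $prc and the "we"-in-"weird" example come from the journal):

```perl
use strict;
use warnings;

# Illustrative only: assign a provisional-recognition $prc tag only
# while the compared engram character still carries activation, so the
# $prc on "we" inside "weird" is cancelled as more characters arrive.
my @word   = split //, 'weird';
my %engram = ( w => 8, e => 8 );   # stored word "we", both chars active
my $prc    = 0;
my $act    = 1;                    # does the match still have activation?
for my $c (@word) {
  if ($act and exists $engram{$c} and $engram{$c} > 0) {
    $prc = 56;                     # hypothetical concept number for "we"
  } else {
    $act = 0;
    $prc = 0;                      # cancel $prc once the match breaks
  }
}
print "prc=$prc\n";   # prc=0: "weird" stays unrecognized
```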

2016 April 23:

Two days ago in we started showing three main depictions at the end of each thought in English by the AI: array-contents; Ghost output; and input prompt. Meanwhile we notice that Russian input is not being displayed in the same format. After much troubleshooting, it turns out that RuVerbGen() was not yet sending the inflectional endings into AudInput().

2016 April 24:

In we have a very specific problem. In Russian we enter "Я вижу тебя" for "I see you" and the AI incorrectly responds, "ТЫ ВИДИШЕШЬ МЕНЯ" when the answer should have been "ТЫ ВИДИШЬ МЕНЯ" for "You see me." We need to troubleshoot how the output verb ends up so garbled and mangled. It could be a very simple one-character or one-line fix, or it could be a vexing problem that will take hours to correct. Let us start by calling up the JavaScript Dushka AI and entering the same input in Russian. Immediately we see that the Dushka Russian AI answers correctly, so we know in advance that there is a solution in store for the Ghost Perl AI. Next we will examine the diagnostic messages created while the Ghost AI was thinking up its response.

From the diagnostics we see that the AI recognizes the input verb correctly as concept #1820. The AI should easily find the correct output form in memory. However, when we un-comment-out a pertinent diagnostic message, we see that RuVerbGen() is unnecessarily being called. The result is garbled because obviously the RuVerbGen() module is trying to generate a form like "ЗНАЕШЬ" which means "you know" but is not appropriate for "you see." By the way, we plan to let RuVerbGen() generate three different kinds of present-tense Russian verbs by detecting "А" for the stem of one conjugation and "У" for the stem of another conjugation, while letting most other Russian verbs be presumed to follow a default format. But those are future plans; now back to the present.

By uncommenting another diagnostic message, we see that the RuVerbPhrase() module did indeed find the correct verb-form in auditory memory, but somehow RuVerbGen() was needlessly called. We look to see if the same problem can be tested for "ЗНАТЬ" meaning "to know", but we discover that the MindBoot() sequence does not contain the full present-tense paradigm of the verb, so we can not perform the test.

We then remove an entire block of code and store it elsewhere to see what happens. The problem goes away, and the RuVerbPhrase() module does not needlessly call the RuVerbGen() module. No, we have to put the code back in, because other needed forms are not being created without it.

Finally it turns out that Speech() was not being called for the correct verb-form because a conditional test of $vphraud had been ended with a bracket beyond the call to Speech(), and so Speech() was not being called.

2016 April 25:

Baikonur, we have a problem. The Russian-thinking Ghost AI answers properly when we type in "Я знаю тебя" ("I know you") for the first input. Ghost responds, "ТЫ ЗНАЕШЬ МЕНЯ" for "You know me" in Russian. Then for a second input we type in "Я вижу тебя" for "I see you" and Comrade Ghost only says "ТЫ ЗНАЕШБ" ("You know") which is the wrong verb and which does not include a direct object. It looks as though some variable is holding over either the concept number or the auditory recall-vector for the previous verb.

2016 April 27:

If we enter an English sentence of which the final direct-object noun triggers SpreadAct() and for which there ought to be a verblock somewhere, how is that $verblock supposed to be found? Actually, it is not supposed to be found initially. If the spread-acted concept has enough activation to win the competition for selection in the EnNounPhrase() module, the $verblock should simultaneously be found.

2016 April 28:

The AI is not letting an input Russian sentence ("Я вижу студента" for "I see a student") go through SpreadAct() to retrieve the MindBoot() sentence "СТУДЕНТЫ ЧИТАЮТ КНИГИ" ("Students read books."). After the input, somehow RuVerbPhrase() is starting off with a $motjuste of 1820 for "see" instead of 1825 for "read".

2016 April 30:

In we have a strange problem with what happens when the SpreadAct() module is called. If we enter the English sentence "You see me", the AI sends the understood 707=YOU concept into SpreadAct() properly but the AI responds, "YOU ERROR MAGIC" instead of "YOU ARE MAGIC". When we try a similar input in Russian and we type in "Ты видишь меня" for "You see me", the AI responds "Я ВИЖУ ВИЖУ" ("I see see"). The Russian modules are not shifting to the "you" concept and they are not finding the correct direct object. But let us deal first with the problem in English. After some troubleshooting, in the EnVerbPhrase() module we comment out one line of code that was setting the $vphraud variable to zero, and the AI began to respond properly, "YOU ARE MAGIC". However, we may yet see that it is better for the AI to seek the correct verb-form than merely to accept the $vphraud value.

2016 May 01:

In we have a problem after two simple inputs. In English we enter "You see me" and the AI correctly sends the direct object into SpreadAct() and the Ghost AI responds, "YOU ARE MAGIC", because that thought is in the English MindBoot() sequence. However, as our second input we enter "ТЫ ВИДИШЬ МЕНЯ" ("You see me") in Russian and the AI incorrectly responds, "Я SEE YOU" in a mixture of Russian and English. Probably some variables are not properly being reset to zero between modules, but we need to troubleshoot and investigate.

Now with we are wondering why the input of "ТЫ ВИДИШЬ МЕНЯ" ("You see me") in Russian may not lead to the sending of the direct object into SpreadAct(). However, apparently the direct object is indeed being sent, but it does not receive enough activation to become the subject of a new response. Let us try to adjust the activation levels.

2016 May 21:

Today in we are thinking of making the Ghost Perl AI alternate between English and Russian as the default language of thought with each release of the AI onto the Web. However, we must inspect the MindBoot() sequence to see if there are enough Russian ideas present for the AI to start out thinking in Russian. Hmm, all we see at the end of MindBoot() is the Russian for "Students read books," an idea intended for a demonstration of the InFerence() module in Russian. Let us look at the old Dushka code in JavaScript to see if there are more ideas there. Yes, in the Dushka Russian AI code the bootstrap includes Russian for "You think something" and "People read books" and "Robots do work" and "I see nothing" and "I understand you" and "God knows everything." But before we add those ideas to the Perl MindBoot(), let us see what already happens when we change the default language setting from English to Russian.

The Russian functionality in the Ghost AI is not yet good enough for alternating between English and Russian as the default language, so we should just do some ordinary troubleshooting. Here is a problem. When we start up and we type in, "Я ВИЖУ ТЕБЯ" for "I see you," the Ghost AI erroneously responds, "Я ВИЖУ МЕНЯ" or "I see me" translated into English. In RuVerbPhrase() we insert

  if ($verblock == 0) { return } # 2016may21; TEST

and it superficially solves the problem, because the AI outputs only the word "Я" for "I" and returns to the calling module without selecting a verb.

2016 May 22:

Today in we will try to deglobalize at least one variable, such as $actbase which is used in the English and Russian AudRecog modules. We are curious to see if it can be declared as a local "my" variable in both of the modules. When we comment-out the variable prior to de-globalizing it and we try to run the AI, we get ten angry lines of complaint and "Execution of aborted due to compilation errors." Well, excuuse me. Now let us see what happens when we use "my" in the English AudRecog module. We do so, and we still get five petulant lines of complaint and the same termination message. So let us use "my" also in the RuAudRecog() module. Hey, no more complaints from Strawberry Perl5. We were able to deglobalize $actbase into a local variable used in two different mind-modules.
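In miniature, the deglobalization amounts to this (AudRecog and RuAudRecog here are toy stand-ins; only the "my $actbase" declarations reflect the change described):

```perl
use strict;
use warnings;

# Illustrative only: under "use strict", a variable removed from the
# global declarations must be redeclared with "my" in every module that
# uses it; each module then has its own independent lexical copy.
sub AudRecog {
  my $actbase = 12;        # lexical to this module only
  return $actbase;
}
sub RuAudRecog {
  my $actbase = 6;         # a separate lexical; no clash with AudRecog
  return $actbase;
}
print AudRecog() + RuAudRecog(), "\n";   # 18
```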

2016 May 23:

As we look for another variable to de-globalize, $audbase comes under consideration, but we see that it carries information from mind-module to mind-module, so it should not be de-globalized.

The buffer-increment variable $binc is a better candidate for de-globalizing, since it plays a role only within the RuVerbGen() module. Since we are de-globalizing the variable, we now take some time to expand its explanation in the web-page of the Table of Variables.

2016 May 25:

In, we enter "Я вижу студента" but the AI answers "БОГ ЗНАЕТ ВСЁ", which is the same as what we added to the MindBoot() in the version. Previously, the AI would respond with "СТУДЕНТЫ ЧИТАЮТ КНИГИ", so something has gone wrong. It turns out that in the MindBoot() the word "БОГ" for "God" mistakenly had been assigned the concept number for "student", so the AI was erroneously switching from a discussion of "student" to a discussion of "God".

2016 May 27:

In we are trying to deglobalize the $prevtag variable, which is needed only in the InStantiate() mind-module. After a noun or a verb has been instantiated, $prevtag holds its concept-number ready to be inserted as a $pre tag, if needed, during the instantiation of a succeeding concept. Thus a verb can have a $pre back to its subject, and a direct object can have a $pre back to a verb.
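The $prevtag mechanism might be sketched like this (the toy InStantiate and the row layout are illustrative; the concept numbers follow the journal):

```perl
use strict;
use warnings;

# Illustrative only: after each concept is instantiated, its
# concept-number is held in $prevtag so that the next concept
# can take it as a $pre tag.
my $prevtag = 0;
my @rows;
sub InStantiate {
  my ($psi) = @_;
  push @rows, { psi => $psi, pre => $prevtag };
  $prevtag = $psi;     # ready to serve as $pre for the next concept
}
InStantiate(701);      # I
InStantiate(810);      # verb -> pre = 701, back to its subject
InStantiate(707);      # YOU  -> pre = 810, back to its verb
print "$rows[1]{pre} $rows[2]{pre}\n";   # 701 810
```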

2016 JUNE 17:

We would like to introduce auxiliary verbs now into the Ghost Perl AI, so that we may use the adverb NOT for the negation of sentences of thought. We need negation for the proper functioning of the InFerence mind-module, so that a refuted inference may be couched in negational terms, as in "God does not play dice with the universe."

When we rename as and we type in, "You know me," eventually the AI says, "I KNOW YOU." However, when we enter, "You do not know me," the AI soon says, "I DO YOU," because it has not treated the input of "do" as an auxiliary verb. It has also not dealt with the negational "not" adverb.

Based on what we see in MindForth and in the JavaScript AiMind.html, we need to introduce into the InStantiate() module some code that checks for "do" or "does" as an incoming auxiliary verb. Let us try inserting such code in the Parser() module. When we do so and we enter "You do not know me," the AGI no longer says "I DO YOU" and it eventually says, "I KNOW YOU," which is encouraging, because the negation of "not" has not yet been implemented.

Next we use $prejux and $jux to get the AI to insert a $jux value of "250" for the adverb "not" in the negation of a verb. Then we need to get the AI to use "not" in generating a negational sentence. We do so by roughing in some search code for 250=NOT in the EnVerbPhrase module and some search code for 830=DO in the new EnAuxVerb() module. The AI starts to respond to negational input with negational output.
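A rough sketch of the $jux tagging during input (the row layout, the verb-number test and the sequence of concept numbers are illustrative assumptions; 250=NOT and 830=DO follow the journal):

```perl
use strict;
use warnings;

# Illustrative only: when the adverb "not" (250) precedes a main verb,
# the verb's row gets $jux = 250, and generation can later search the
# verb's flag-panel for that value to negate the output.
my $prejux = 0;
my @rows;
for my $psi (701, 830, 250, 810, 707) {     # I DO NOT KNOW YOU
  my $jux = 0;
  if ($psi == 250) { $prejux = 250; next }  # remember the negator
  if ($prejux and $psi >= 800) {            # verbs assumed >= 800 here
    $jux = $prejux;                         # tag the verb as negated
    $prejux = 0;
  }
  push @rows, { psi => $psi, jux => $jux };
}
my ($negated) = grep { $_->{jux} == 250 } @rows;
print "$negated->{psi}\n";   # 810
```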

2016 JUNE 18:

There is a problem with the Ghost Perl AGI because we want the AI Mind to remember all its knowledge and not get stuck in a rut with a chain of thought due to faulty activation levels.

Normally the AGI receives a sentence of input and as a result the concepts of the input are highly activated. Then the thinking modules generate or retrieve a thought about the input. We would like to make the entry of a single noun, followed by [RETURN], not just activate the single noun but also feed it into the SpreadAct() module, for several reasons. If only the noun is active, the thinking modules are not able to generate or retrieve a thought. If we feed the concept-number of the noun into SpreadAct(), then there is a better chance of getting the output of any knowledge remembered about the noun. It is also helpful to be able to query the AGI just by entering a single noun.

Perhaps we could have an insurance policy of both activating input sentences and sending input nouns into SpreadAct(). But perhaps we should adopt an even more drastic policy, namely, of not counting on the input of sentences to generate a thought by means of activation-upon-input, but rather of generating a thought by having both the input-subject and the input-object sent into SpreadAct().

2016 JUNE 19:

The Perl AGI should output a thought stemming from only a limited set of sources: an idea stored in the knowledge base; a new idea generated by logical inference; or a sentence generated as the expression of information arriving from the senses, as for example when the AGI is describing what it sees in a visual scene. There should be no random and potentially erroneous associations from random subject to random verb and to random object.

As we start coding and we simply press [Enter] with no input to see what output results, the EnNounPhrase() module defaults to 701=I as a subject. Immediately a t=753 $verblock is found which locks the AI into an output of "I HELP KIDS" from the innate knowledge base. Currently the SpreadAct() module is being called from EnNounPhrase() when the AI outputs the 528=KIDS direct object, but perhaps the call should wait until after ReEntry() inserts the idea into the moving front of the knowledge base.

The output of a thought, even from memory, is the result of spreading activation and should not lead to more spreading activation until the same thought becomes a form of input during ReEntry(). There must be some way to delay the calling of SpreadAct() until a new-line or [Enter] is registered. In OldConcept() we could set the $actpsi with the $oldpsi value, but not call SpreadAct() until the end of the input or re-entry.

We have created a new $quapsi variable "by which" the final noun ($psi) from InStantiate() can go into SpreadAct() from the ReEntry() module and possibly spread activation to pertinent knowledge in the knowledge base of the AI memory.
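The $quapsi deferral can be sketched like this. Only the timing idea, that InStantiate() records the noun and ReEntry() spreads the activation later, comes from the journal; the data layout and the boost value are assumptions.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of deferring SpreadAct(): InStantiate() only records the
# final noun's concept number in $quapsi; the end of ReEntry()
# consumes it, so activation spreads once per completed thought.
my @psy    = ( [ 707, 0 ], [ 528, 0 ] );   # simplified rows: [psi, act]
my $quapsi = 0;

sub spread_act {
    my ( $actpsi, $boost ) = @_;
    $_->[1] += $boost for grep { $_->[0] == $actpsi } @psy;
}

sub instantiate {
    my ( $psi, $is_noun ) = @_;
    $quapsi = $psi if $is_noun;   # remember the noun, but do not spread yet
}

sub reentry_end {                 # called when the whole sentence is in
    if ($quapsi) {
        spread_act( $quapsi, 32 );
        $quapsi = 0;
    }
}

instantiate( 820, 0 );            # verb: no effect on $quapsi
instantiate( 528, 1 );            # final noun 528=KIDS
reentry_end();                    # only now does the spreading happen
```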

2016 JUNE 21: Creating a diagnostic minddata.txt file

Now we would like to implement some code that creates a minddata.txt file for diagnostic purposes when we press "Q" to "Quit" the AI. First, we get the AI to open an empty "minddata.txt" file.

Now we have developed the following block of code:

  if ($reversed =~ /[Q]/) {  # 2016jun21: enlarging quit-sequence
    use IO::File;  # module needed for the object-oriented filehandle below
    my $fh = new IO::File; # 2016jun21: Perl_Black_Book p. 561
    print "Opening diagnostic minddata.txt file...\n";  # 2016jun21
    $fh->open(">minddata.txt") or die "Can't open: $!\n"; # 2016jun21
    $tai = $vault;  # 2016jun21: skip the MindBoot() sequence.
    do {  # 2016jun21: make a loop
      print "t=$tai. psi=$psy[$tai], ";  # 2016jun21: show @psy concept array
      print " aud= $ear[$tai], \n";      # 2016jun21: show @ear auditory array
      $fh->printf("t=$tai. psi=$psy[$tai], aud= $ear[$tai], \n");  # 2016jun21: PBB p. 535
      $tai++;  # 2016jun21: increment $tai up until current time $t.
    } while ($tai < $t);  # 2016jun21: show @psy and @ear arrays at recent time-points
    print "Closing minddata.txt file...\n";  # 2016jun21
    $fh->close;  # 2016jun21: Perl_Black_Book p. 561
    die "TERMINATE: Q means quit. \n";  # 2016jun21
  }  # 2016jun21: end of quit-sequence
We will comment out the above block of code which serves to create a minddata.txt file with the contents of memory beyond the "vault" area of the MindBoot(). The minddata.txt file will be useful for diagnostic purposes, but we comment it out so as not to create files on the computers of Netizens who download and run the perlmind.txt AI program. AI coders may re-activate the diagnostic file code for such purposes as seeing how activated each concept is and to check on how well the ReJuvenate() module is functioning. The ability to create the minddata.txt file hints at such future possibilities as saving and re-loading the state of the AI Mind, and making a remote copy not only of the Perl AI software but also of the contents of the AI Perlmind prior to the making of the "clone" of the AI.

2016 JUNE 23:

With our diagnostic log-file, we can check to see if the process of neuronal inhibition is setting too negative a level of activation on the concepts in a sentence of thought.

NOTE: The minddata log-file should be used to test how long an idea remains "submerged" in deep inhibition, until it can be brought back up into conscious thought with the SpreadAct() module. For instance, consider inputs like:

Human: You know boys.
(then, somewhat later:)
Human: Boys play games.
(then, a LOT later:)
An input ending in "boys" ought to be able to go through SpreadAct() and re-activate the idea, "BOYS PLAY GAMES". If not, the minddata.txt file should give some indication of why not.

2016 JUNE 24: Orchestrating Activation-Levels for Variety of Thought

In EnNounPhrase() we will try to let lack of activation default only to an instance of 701=I that has a positive verblock in the $k[7] position of a row in the @psy conceptual array.

We have concepts being inhibited in EnNounPhrase() when they are subjects and then elevated to high activation by SpreadAct() when they are direct objects. In the case of the 701=I default-to-ego subject, we would like to see all self-knowledge gradually get a chance to ascend in activation to the summit of consciousness. Therefore we may need to practice deep inhibition of a used subject and serious PsiDecay() of all active concepts, and we may have to revise SpreadAct() so that it still imparts activation to a recent direct object, but only to the already most activated engram of the erstwhile direct object. Or should we have SpreadAct() impart a measure of positive activation to all instances of the formerly direct-object concept, and let the motjuste-competition sort out the most active instance?

It looks like the default 701=I pronoun is not being inhibited when it is selected as a default subject in EnNounPhrase(). Thus the same ideas about ego in the MindBoot() vault keep getting selected over and over again, for lack of inhibition.

We have made the 701=I default ego-concept be inhibited, and we see now that this achieves a better orchestration of the conceptual activation levels. For instance, we entered "You know boys" and "Boys play games." Soon and often the AI made the output "I KNOW BOYS". After the neuronal inhibition had dissipated sufficiently, the AI eventually output first "I KNOW BOYS" and then "I AM ANDRU" and "BOYS PLAY GAMES".

2016 JUNE 26: Correcting misallocation of $tdo nounlock in Parser()

As we continue to orchestrate and smooth out the conceptual activation-levels in the Artificial Mind, we may need to change what the InStantiate() module does to a message of verbal input. We should perhaps impose activation upon the previous nodes of a noun-concept and not upon the current node being instantiated, for several reasons. It does not make sense to store the current input with full activation, because then the AGI Mind would simply have a tendency to repeat the idea entering the Mind as input. It makes more sense to activate the past nodes of a noun-concept, so that the AGI can generate a remembered idea about the input. Since we think of the concept as dwelling up and down the length of a long neuronal fiber, it makes sense to activate the whole chain of conceptual nodes on the quasi-fiber. We thus also bypass the SpreadAct() mind-module, because we take the activation from the input message directly to the stored ideas that happen to contain the concept being activated.

If we attach zero activation to the current node of a concept being instantiated, we make it easier to process input messages containing the question-words "who" or "what" and so forth. In the older AI Minds such as MindForth and the JavaScript AiMind.html, we used special code to catch the input of such query-words and to assign zero activation to them. If zero activation is the default condition for fresh input, then we may not need special handling for the query-words.
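This policy of activating the past nodes of a concept while storing the current node at zero can be sketched together. The +32 boost and the hash-based rows are assumptions for illustration.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: activate the PAST nodes of an incoming concept along its
# whole quasi-fiber, and store the CURRENT node with zero activation.
# With zero as the default, query-words like "who" and "what" need
# no special zeroing code.
my @psy = ( { psi => 528, act => 0 }, { psi => 820, act => 0 } );

sub instantiate {
    my ($psi) = @_;
    $_->{act} += 32 for grep { $_->{psi} == $psi } @psy;   # whole chain of nodes
    push @psy, { psi => $psi, act => 0 };                  # fresh node stays inert
}

instantiate(528);   # re-entering 528=KIDS wakes its older engrams
```

Because the stored ideas themselves receive the activation, the SpreadAct() mind-module is bypassed for this particular pathway, just as the entry describes.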

Now we seem to have discovered why the AI was making illogical "I AM" statements. The wrong $tkb is being stored with the instance of the 800=BE verb, so that the retrieval of the idea ends with a direct object immediately prior to the memory being retrieved, rather than with a correct predicate nominative. Then we discover that the problem is more pervasive. Troubleshooting leads to a correction of the setting of the $tdo (time-of-direct-object) variable as a nounlock in the Parser() module. The improvement is so salubrious that we prepare to upload the debugged code to the Web. In the current code, apparently inhibition is so deep that factual knowledge does not re-emerge until long after it has been input to the AI. We may soon adjust the depth of inhibition so that the AI quasi-consciousness has more immediate access to all the facts in its knowledge base. The version being released has the memetic advantage of not yet displaying faulty thinking.

2016 JUNE 27:

The Ghost Perl AGI project is deeply involved in mind-design as we tinker with the most fundamental movements on the MindGrid. Before we change the methods of neuronal inhibition, let us simply lessen the depth of inhibition and see what happens when inhibition is more shallow. The results are very strange. In the InStantiate() mind-module under positive $inhibcon, when we change the deep "-48" inhibition to a shallow "-32", the input entry of "Boys play games" no longer gets retrieved before the t=4000 time-point just prior to the calling of the ReJuvenate() module. So we go in the other direction and we deepen the "-48" inhibition to an even deeper "-56" inhibition. Now suddenly the "BOYS PLAY GAMES" factoid gets recalled at t=3489, with a positive activation-level of "888" on the "BOYS" concept. We also notice that the Perl AGI had tried to start a sentence with the 701=I concept as the subject, but instead the idea of "BOYS PLAY GAMES" was retrieved. The 701=I ego-concept must have been too inhibited, so it yielded to the "BOYS" concept during the selection of a subject. A little later, at t=3537 the AGI retrieves "GAMES HELP BOYS". But we do not want the AGI to rely on profoundly deep levels of ReEntry() inhibition. Let us go back to "-48" inhibition and try something else.

In the English EnNounPhrase() mind-module, let us change the inhibition of a currently selected subject from a deep level of "-72" to a more shallow "-16". The upshot is that we get all the way to "t=4000" and still there is no retrieval of "BOYS PLAY GAMES". Let us go the other way and deepen the inhibition from "-72" down to "-80". Now we get "BOYS PLAY GAMES" at "t=3230" and the "BOYS" concept has an activation of "890" when it is selected from its "t=2457" input engram. Then we get "GAMES HELP BOYS" at "t=3263" and "GAMES" has an activation of "118" at its "t=2487" engram as an input subject. So when we inhibit the selected subject more profoundly, we obtain an earlier retrieval of entered facts of knowledge. But we would rather inhibit the ReEntry() of ideas and not the generation of ideas.

When we comment out the lines of code for the immediate inhibition of selected subjects, the AGI goes into a repetitious output of "I AM ANDRU" with a "t=533" verblock. However, during ReEntry() we have been inhibiting only nouns and not pronouns like 701=I. The AGI also gets into a repetitious output of "KIDS MAKE ROBOTS" with an ever-increasing activation on the "KIDS" concept at "t=583" in the MindBoot() sequence, so that the AGI cannot let go of "KIDS" as a subject. In the minddata.txt file, we notice that "KIDS" is maintaining a consistently high activation, while "MAKE" and "ROBOTS" are being inhibited. In the InStantiate() module, let us try commenting out the call to SpreadAct() for a concept being instantiated. We also comment out a call from OldConcept() to SpreadAct(), because the AGI was locking on to a repetition of "KIDS MAKE ROBOTS" and then "BOYS PLAY GAMES" with higher and higher activations on an early engram of the "BOYS" concept.

When we remove all English (non-Russian) calls to SpreadAct() except from ReEntry(), we observe an interesting phenomenon. The AGI repeats "BOYS PLAY GAMES" over and over, starting with an activation of about "50" on "BOYS" until the level reaches "22" on "BOYS", after which the AGI says once, "I KNOW BOYS" with an activation of "18" on the 701=I concept. Then the AGI goes back to saying "BOYS PLAY GAMES" about ten times, while the activation on "BOYS" gradually drops again and the AGI says "I KNOW BOYS" another single time. Apparently "I KNOW BOYS" is resulting from the selection of 701=I as the default subject when no other concept is highly activated.

We are making progress here, because the minddata.txt file shows that ReEntry() is using $inhibcon to make InStantiate() set an arbitrary "-48" inhibition upon all the concepts of an idea being re-entered into the AGI Mind and therefore passing through the InStantiate() mind-module. We see in the minddata.txt log-file that PsiDecay() is decreasing the inhibition of the older engrams of the re-entered ideas. The older engrams of "BOYS" in "BOYS PLAY GAMES" are showing a positive activation of "39" at many time-points, apparently from when SpreadAct() passes activation to the direct object in "I KNOW BOYS".
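PsiDecay() behaving as described here, easing both positive activation and negative inhibition back toward zero, might look like the following outline. The decrement of one point per call is an assumption; only the direction of the decay comes from the journal.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: PsiDecay() pulls every engram's activation toward zero, so
# that a deep "-48" inhibition gradually dissipates and old knowledge
# can resurface in the chain of thought.
my @psy = ( { psi => 571, act => -48 }, { psi => 528, act => 39 } );

sub psi_decay {
    for my $row (@psy) {
        $row->{act}-- if $row->{act} > 0;   # positive activation decays
        $row->{act}++ if $row->{act} < 0;   # inhibition wears off
    }
}

psi_decay() for 1 .. 10;   # ten passes of decay
```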

2016 JUNE 28:

Today we have started coding anew, initially as a copy of an earlier version, and we are abandoning the version of the Perl AGI from yesterday because we were not able to achieve a stable and properly functional program after seven hours of intense coding. Nevertheless we have ideas which we hope to implement today for a worthwhile release of the Ghost Perl AGI code.

We are creating the minddata.txt file with the entire contents of the @psy array from the $midway starting point, because we must work on the neuronal inhibition of engrams contained in the MindBoot() sequence.

We notice that the AGI might retrieve and output "I KNOW BOYS", but apparently subject-selection does not pass to the "BOYS" concept with an activation-level of 196 at t=2457, because the "ROBOTS" concept at t=2586 has an even higher "224" activation. The AGI has to say "I KNOW BOYS" several times to build up enough activation on the "BOYS" concept for it to be selected as a subject of "BOYS PLAY GAMES".

We also notice that in the English EnNounPhrase() module, there is sometimes a subject-psi that already has a "verblock" going into the mind-module. Perhaps we should zero out $verblock at the start of EnNounPhrase() and see what happens. We also zero out $subjpsi, and we stop getting pre-ordained subjects and verblocks.

Then we encounter a problem of a binary rut of repetition of "BOYS PLAY GAMES" and "GAMES HELP BOYS" over and over again. We propose to eliminate such a rut by imposing such deep inhibition on any selected idea-engram that only one imposition of activation from SpreadAct() will not be enough to immediately overcome the deep inhibition. Let us try having SpreadAct() impose 32 points of extra activation, and having EnNounPhrase() inhibit selected subjects down to "-90" points. We do so, and we see "BOYS PLAY GAMES" emerge when "BOYS" has a built-up activation of "154". We see "GAMES HELP BOYS" emerge when "GAMES" has an activation of "60". Then "BOYS PLAY GAMES" emerges again with an activation of "32" on "BOYS". Then we get "GAMES HELP BOYS" with an activation of "32" on "GAMES". We are back in the binary rut. Perhaps the InStantiate() module is imposing activation on "BOYS" and on "GAMES".

2016 JUNE 30:

We would like now to input three sentences starting with "I" and see if the Ghost Perl AGI can separately retrieve all three ideas from memory after expressing the idea "I know you." "You" as the direct object should cause SpreadAct() to re-activate the stored ideas. First we will tell the AGI "You know me" so that it will later say, "I know you." Let us also input ideas like "I have kids" and "I know robots" and "I see women".

In the InStantiate() module, we are having to let either a noun or a pronoun provide a value to the $quapsi variable, so that ReEntry() can call SpreadAct() for a pronoun as direct object.

2016 JULY 01: Debugging the AudRecog() Mind-Module

The Ghost Perl AGI is misrecognizing the word "weird" as if it were "we", although it recognizes "boys" as the "589=boy" concept. In AudRecog() we need somehow to limit how long a $prc (provisional recognition) remains valid as a pattern-match on a word being entered into the @ear auditory memory array. We have used a new variable $prclen to determine when an input word has gone more than two characters in length beyond an erroneous provisional recognition. We then abandon the $prc provisional recognition.
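The $prclen safeguard can be sketched as a standalone recognizer. The %lex lexicon here is a hypothetical stand-in; the real AudRecog() works character by character against the @ear auditory array, not against whole strings.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the $prclen fix: keep a provisional recognition ($prc)
# only while the incoming word has not grown more than two characters
# past the point where the provisional match was made.
my %lex = ( we => 567, boy => 589, boys => 589 );

sub aud_recog {
    my ($word) = @_;
    my ( $prc, $prclen ) = ( 0, 0 );
    my $sofar = '';
    for my $ch ( split //, $word ) {
        $sofar .= $ch;
        if ( exists $lex{$sofar} ) {
            ( $prc, $prclen ) = ( $lex{$sofar}, length $sofar );
        }
        elsif ( $prc && length($sofar) - $prclen > 2 ) {
            ( $prc, $prclen ) = ( 0, 0 );   # abandon the stale recognition
        }
    }
    return $prc;
}

print aud_recog('weird'), "\n";   # 0   -- "we" no longer misfires
print aud_recog('boys'),  "\n";   # 589 -- still recognized as 589=boy
```

Note that a stem match only one or two characters short of the full word, such as "boy" inside "boys", survives the check, which is exactly the behavior the journal wants to preserve.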

The AGI now creates a remarkably stable MindGrid with old and new concepts in activational equilibrium. The human user may input simple Subject-Verb-Object (SVO) facts into the AGI and see the AGI retrieve the facts from its knowledge base (KB) when prompted by new inputs making mention of concepts related to the previous knowledge. If the Ghost Perl AI can now perform these intellectual operations with a handful of concepts, it can arguably perform the same mental operations with a billion concepts.

2016 JULY 03:

Today we want to make the process of calling the SpreadAct() module more straightforward. However, we still need the SpreadAct() module for when we will use it to re-activate not only a verb-node, but also each $pre and $seq of the verb.

If we temporarily prevent the setting of $quapsi in the InStantiate() mind-module and we enter "I see kids", the AGI activates the 528=KIDS concept and outputs "KIDS MAKE ROBOTS". Why does "ROBOTS" not activate previous engrams of itself to cause the output of "ROBOTS NEED ME"? It is because the $pov is not external during ReEntry(). Let us temporarily make it not matter what the $pov is. Still we do not get "ROBOTS NEED ME", until we discover and fix a bug in the EnNounPhrase() mind-module. During selection of the $motjuste for a subject, as each candidate was considered, the comparand activation-level was not being reset to the activation of each successive noun under consideration, so the noun with the highest activation was not winning the competition for selection as a subject. When we inserted "$act = $k[1];" as a line of code to adjust the metric for comparison, "ROBOTS" won the selection and we got "ROBOTS NEED ME" as output. Unfortunately, that comparison bug had apparently been lurking there for about three months.
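The comparison bug and its one-line fix can be illustrated in isolation. The candidate rows here are simplified; the real loop scans rows of the @psy conceptual array.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the EnNounPhrase() subject-selection bug: unless the
# comparand $act is reset to each new winner's activation, an early
# candidate with modest activation can never be properly displaced,
# and the most activated noun fails to win.
sub pick_subject {
    my @candidates = @_;          # each row: [ $psi, $activation ]
    my ( $motjuste, $act ) = ( 0, -999 );
    for my $k (@candidates) {
        if ( $k->[1] > $act ) {
            $motjuste = $k->[0];
            $act      = $k->[1];  # the missing line: "$act = $k[1];"
        }
    }
    return $motjuste;
}

# 571=ROBOTS, holding the highest activation, now wins the selection.
print pick_subject( [ 701, 18 ], [ 571, 62 ], [ 528, 40 ] ), "\n";
```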

2016 JULY 04:

Somehow the ideas passing through ReEntry() and back into the Artificial Mind are building up fresh activation too quickly and thus being selected again as the subject of a thought too soon. Let us try to ameliorate the situation, first by lowering the activation imposed by the InStantiate() mind-module from "48" down to "32". When we do so, at first the problem seems to be corrected, but soon the AGI gets into a rut of saying "ROBOTS NEED ME" over and over again, while hundreds of points of activation build up on old and new engrams of the "ROBOTS" concept. It goes against our theory of mind-design to have no upwards limit on how highly a concept can be hyper-activated, so therefore in the InStantiate() mind-module let us change from imposing additional activation to imposing an absolute level of activation. No, that method still puts the AGI into a repetitive rut. Let us try again with additive activation, but with a much lower increment than "32".

In the InStantiate() mind-module, we could try putting an upper limit on the possible activation of an old concept being instantiated. In that way, a host of old concepts could be competing for selection in the generation of an output, while the concepts at maximum activation would gradually lose their activation through PsiDecay(). We do so, but the attention of the Artificial Mind does not shift to the 701=I ego-concept, so we arrange for the imposition of activation not only upon nouns going through InStantiate(), but also upon pronouns. Then at the end of the word-entry loop in the AudInput() module, we insert a call to PsiDecay() so as to slightly reduce the activation of other, possibly maximally activated engrams just before the instantiation of any reentrant or incoming word. The Artificial Mind then shows variety in its meandering chains of thought.
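Capping the activation imposed during instantiation might look like the following. The ceiling of 63 is an assumed figure chosen for illustration, not a value taken from the journal.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: additive activation with an upper limit, so that a concept
# caught in a loop of re-entry cannot be hyper-activated without
# bound while PsiDecay() drains its competitors.
my $CAP = 63;   # assumed ceiling on any engram's activation

sub add_activation {
    my ( $row, $boost ) = @_;
    $row->{act} += $boost;                     # additive, not absolute
    $row->{act} = $CAP if $row->{act} > $CAP;  # clamp at the ceiling
    return $row->{act};
}

my $robots = { psi => 571, act => 50 };
add_activation( $robots, 32 );   # 82 would exceed the cap; clamped to 63
```

This keeps the additive scheme the entry prefers while honoring the theory-of-mind requirement that no concept be hyper-activated without limit.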


2016 JULY 06: Visualizing the MindGrid as Theater of Neuronal Activations

Recently we have developed the ability to visualize the MindGrid as Theater of Neuronal Activations. At the most recent, advancing front of the MindGrid, we see an inhibited trough of negative activations. We see an input sentence from a human user activating concept-fibers stretching back to the earliest edge of the MindGrid. We see an old idea becoming fresh output and then being inhibited into negative activation at its origin. We see outputs of the AGI passing through ReEntry() to re-enter the Mind as inhibited engrams while re-activating old engrams. We see the front-most trough of inhibition preventing the most recent ideas from preoccupying and monopolizing the artificial consciousness.

In the ghost program, we have now commented out some code in the InStantiate() mind-module that was letting only nouns or pronouns of human input be re-activated along the length of the MindGrid. The plan now is to let all parts of an incoming sentence re-activate the engrams of its component concepts.

Now, how do we make sure that the front-most engrams of the sentence of human input will be inhibited with negative activation in the trough of recent mental activity on the MindGrid? It appears that InStantiate() makes a sweep of old engrams to set a positive activation, and then at the $tult penultimate-time it sets an activation for the current, front-most input. In order to keep a trough of recent inhibition, let us try setting a negative activation at the $tult time-point.
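The trough mechanism can be sketched as follows, using the same figures that appear in this entry's minddata.txt observations (a +32 backward sweep and -46 at the $tult time-point); the hash-based rows are a simplification.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of $tult trough-inhibition: older engrams of an incoming
# concept receive a positive backward sweep (+32), while the current
# front-most engram is stored inhibited (-46), keeping the newest
# input from monopolizing the artificial consciousness.
my @psy = (
    { psi => 707, act => 0 },     # old engram of 707=YOU
    { psi => 707, act => -46 },   # recently inhibited engram of 707=YOU
);

sub instantiate_trough {
    my ($psi) = @_;
    $_->{act} += 32 for grep { $_->{psi} == $psi } @psy;   # backward sweep
    push @psy, { psi => $psi, act => -46 };                # trough at $tult
}

instantiate_trough(707);   # re-entry of 707=YOU
```

Note how an engram already sitting at -46 rises only to -46 + 32 = -14 after the sweep, which matches the arithmetic logged for the 528=KIDS engram in this entry.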

After input of "I see kids" and a response by the AI of "KIDS MAKE ROBOTS", in minddata.txt we see the sweep of positive activation of old engrams:

At t=477, 707=YOU has an activation of 30.
At t=518, 707=YOU has an activation of 30.
At t=317, 820=SEE has an activation of 30.
At t=575, 528=KIDS has an activation of 62, apparently because there was also a re-entry of "KIDS".

As a result of the $tult trough-inhibition:

At t=2426, 707=YOU has a negative activation of -46.
At t=2430, 820=SEE has a negative activation of -46.
At t=2435, 528=KIDS has a negative activation of only -14, apparently because the AI response of "KIDS MAKE ROBOTS" made a backwards sweep imposing 32 points of positive activation upon the pre-existing -46 points of activation, resulting in -46 + 32 = -14 -- still part of the negative trough.

Now the AGI is making its series of innate self-referential statements ("I AM A PERSON"; "I AM A ROBOT"; "I AM ANDRU"; "I HELP KIDS"), but why is it not using SpreadAct() to jump from the reentrant concept of "KIDS" to the innate idea of "KIDS MAKE ROBOTS"? Let us see if SpreadAct() is being called, and from where. We do not see SpreadAct() being called in the diagnostic messages on-screen while we run the AGI. Let us check the Perlmind source code. We see that the OldConcept() module has been calling SpreadAct() for recognized nouns, but now we delete that snippet of code because we see in our MindGrid theater that we do not want OldConcept() to make any calls to SpreadAct(). The AGI still runs.

We see that SpreadAct() is potentially being called from the ReEntry() mind-module, but the trigger is not working properly, so we change the trigger. Then we get SpreadAct() re-activating nouns, and we begin to see a periodic association from the innate self-referential statements to "KIDS MAKE ROBOTS" and from there to "ROBOTS NEED ME". Apparently the inhibitions have to be cancelled out before the old memories can re-surface in the internal chains of thought of the AGI.


[ . ] Stop having verbs like ДЕЛАТЬ (to do, to make) as the default Russian verb in the RuVerbGen() module. Instead, use verbs like ГОВОРИТЬ (to speak) as the default and treat verbs like ДЕЛАТЬ and ТРЕБОВАТЬ (to demand) as special paradigms to be recognized from their stems.

[ . ] For special Ghost Webserver AI ontologies, create small ontological information.txt files written in a format digestible by the latest Ghost AI software, so that the Ghost need only read each file in order to have knowledge about each subject.

[ . ] Make the Perl Ghost AI able to go out and surf the Web in search of information.

[ . ] Make the Ghost AI able to send and receive e-mail messages.

[ . ] Give the Ghost AI a Web presence where Netizens may interact on-line with the Perl AI Mind.

[ . ] Install the Ghost AI in a humanoid robot.

[ . ] Make the Ghost AI able to send an exact copy of its software and its memories from one webserver to one or more other webservers.

[ . ] Create graphic displays of the Ghost AI casually thinking or reacting to human input with an AI brainstorm. Make thinking visible.

1. MindGrid as Theater of Neuronal Activations

Influences upon AGI neuronal activation include:

Books for creating Perl AI:
Learning Perl, by Randal L. Schwartz and Tom Christiansen
Perl Black Book, by Steven Holzner
Perl by Example, Fifth Edition, by Ellie Quigley
Programming Perl, by Larry Wall, Tom Christiansen, and Jon Orwant
[Perl AI enthusiasts should write books on "AI in Perl".]

Page created: 12 April 2015
Mentifex asks White House Deputy Technology Chief Ed Felten to point out technology reporter
John Markoff of the New York Times at the Artificial Intelligence: Law and Policy workshop.