Hive GPT Report: 2

Hello Everyone,

The original post about this project can be found here.

The first Hive GPT Report can be found here.

The Hive GPT custom model can be found here.

Please note that you will need a ChatGPT Plus subscription to use it. Also note that the custom model will eventually be available in the OpenAI GPT Store once it is publicly released.

Some notes on Hive GPT after one month!

Over the last month I have used the Hive custom GPT model extensively (but not exclusively) to work on various Hive coding projects and aside from a few quirks... it has not been a disappointing experience.

It is worth noting that I often have access to unreleased beta features and GPT models that are not publicly available... so the results that I get from the model may not be an accurate way to benchmark its performance at various tasks.

The biggest quirk that I have encountered is that the model tends to 'consult its Knowledge base' a bit too often... and will do so without being explicitly prompted to. I think that this is occurring due to how I have the 'instructions' for the model configured and I may be able to resolve it by tinkering with said instructions.

Honestly, there are some other quirks with the model (like the code interpreter failing, or it refusing to read the contents of some files) but I think they are more related to the ChatGPT backend services than the model itself.

All in all I think those things will be remedied over time and may well be a part of why the launch of their GPT store was delayed... but that is just a guess and it may well have more to do with various security concerns... and some active exploits of their framework. The biggest exploit at the moment seems to be the markdown image rendering one but I also saw that they have implemented some mitigation for it.

To be clear I have no idea how secure the model truly is (or whether the above mentioned mitigation fully works) but it is encouraging to see that they are finally addressing it.

All that stuff aside, the big question is: What has the model been useful for?

Since there is no way for me to know what others have been using it for (if anyone has used it at all) I can only share what I have found it useful for... which to be blunt... is a lot!

Having spent many years now learning everything that I can about the Hive blockchain, how to use it and how the technology actually works... I have to admit that I have probably learned more in one month (via the model) than all those years combined.

That said I am in no way a programmer/developer and must reiterate yet again... that I heavily rely on the model to achieve my goals when it comes to that stuff.

As some of you are aware I have been working on a base layer file storage (and possible smart contract) solution for Hive for well over a year now.

The interesting part of my journey began with the advent of the custom models... and with the 'knowledge cut-off date' for the models being moved up closer to the current state of things.

To clarify, it was previously quite tricky (even using the web browsing plugin) to gain any real assistance from the models in regard to Hive beyond some basic information. All of that changed with the custom models and their ability to have files and documentation uploaded to them... which they can consult/utilize while assisting a user.

As a side note, I have yet to update the model's files to reflect the current released versions of the code bases in its Knowledge base... but I will hopefully get to that soon. Just know that if you do use the model for coding it is worthwhile to double-check your work against the current versions of whichever code bases you are working with, using in your projects or trying to build off of.

Alright, below I will share some of the things that I have been working on with the model but by no means can I squeeze them all into this entry.

The biggest hurdle in my quest for base layer file storage was, of course, how to split the files into small enough chunks to fit in a single Hive blockchain block. I actually worked that particular system out some time ago, long before the custom GPT models were available, and I call it the hive-file-chunker method.

As I have noted before, the method is not perfect and it can assuredly be fine-tuned further for better compression ratios. In its current state, it is quite stable so I have not sought to 'enhance' it other than to tune it a bit (with the custom model) for dealing with folder inputs, being able to adjust the character count of each chunk and whether it produces a single non-chunked JSON file (for each input) or many JSON files (per input) that can be reassembled later.

Once I had that method 'dialed in' I used the custom model to begin a series of experiments (and in a few cases continue some experiments) that I had been conducting before with reconstructing the 'chunked' data and putting it to use.

I know that some of my experiments are really over the top when it comes to on chain file storage and smart contracts... so please keep in mind that I am merely 'exploring possibilities' and am not writing (nor do I intend to begin writing) anything to the blockchain itself... until the entirety of the project has undergone the inspection of folks who are actual developers... and it passes some kind of consensus that it is worthwhile, secure and will not be harmful to the Hive ecosystem. In other words, I would deeply appreciate it if folks do not begin using these methods until those conditions are met.

Okay, now it is time for the fun part of showing off some of what I was able to accomplish by using the model over the previous month. All of the publicly released projects mentioned below can be found on my GitHub page here.

Please note that some of the following information is from the documentation for those projects.

The following are all a part of the hive-fc-linux repository found here.

In essence, each one of the following projects chunks the data files, creates a 'chunky.html' file and then loads the contents into a browser, where the data is reconstructed, given a blob URL and made available to the browser as if it came from a real file. Basically, this setup is my proof of concept and a way to do the testing without writing anything to the blockchain directly.
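For anyone curious about the browser side of that, the reconstruct-and-blob step looks something like this minimal sketch. The function name and chunk format here are hypothetical, not the actual chunky.html code:

```javascript
// Illustrative browser-side reconstruction (hypothetical names, not the
// actual chunky.html code): join the chunks back into bytes, wrap them in
// a Blob, and hand back a blob: URL the browser can treat like a real file.
function chunksToBlobUrl(chunks, mimeType) {
  const base64 = [...chunks]
    .sort((a, b) => a.index - b.index)
    .map((c) => c.data)
    .join('');
  // Decode the base64 string into raw bytes.
  const bytes = Uint8Array.from(atob(base64), (ch) => ch.charCodeAt(0));
  const blob = new Blob([bytes], { type: mimeType });
  // The returned URL can be fed to <video src>, <audio src>, v86, etc.
  return URL.createObjectURL(blob);
}
```

Once the blob URL exists, the rest of the page (a media player, v86, a file explorer) never needs to know the 'file' was reassembled from chunks.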

I also wanted to make a 'smart contract' that was truly smart... hence why I tinker with booting small operating systems that are reconstructed from the chunked data.

Begin hive-fc-linux repository contents:

Hive_FC_Linux <- Boots linux.iso with v86.

Hive_FC_Tinix <- Boots tinix.img with v86.

Hive-FC-Freedos <- Boots freedos722.img with v86.

Hive-FC-Freedos-Tiny <- Boots freedos.boot.disk.160K.img with v86.

Hive_FC_V86-NodeVM <- Boots NodeVM.js with node.

Hive_FC_Sectorforth <- Boots sectorforth.img with v86.

Hive_FC_File_Explorer <- A simple blob file explorer for reconstructed files.

Hive_FC_Audioplayer <- Plays audio from reconstructed files.

Hive_FC_Movieplayer <- Plays videos from reconstructed files.

Hive_FC_File_AI_OS <- Blob file explorer with advanced JSON and squashfs features.

Hive_FC_OS_TOOLBOX <- Miscellaneous tools and a sandbox for testing.

End hive-fc-linux repository contents.

So as you can see I got quite involved with that project and eventually it led me to creating the hive-smart-vm project, which, although similar to Hive_FC_V86-NodeVM (found in the hive-fc-linux project), is also very different because I have focused most of my efforts on it as a 'final' solution.

Explaining the logic behind all of that would require its own post so to keep it simple... I like how I can keep everything in a single file by using 'Node.js' instead of the other methods I use to start the virtual machines or load the files.

The hive-smart-vm project can be found here.

Begin excerpt from the Hive-Smart-VM README:

An experimental V86 Linux running in node to be stored on the blockchain that has smart execution logic.

This project is part of the hive-fc-linux and hive-file-chunker projects.

This project is also a Linux fork of v86-NodeVM.

The node-toad directory contains everything needed to nest a node script inside of another node script.

The toad-js directory contains a version of 'hive_smart_vm.js' that is nested with the node-toad method and whose V86 (and Linux OS) has been encrypted with AES encryption. Note: Please use the contents of the included 'secretKey.txt' to unlock it after launch.

Please note that nothing is written to the blockchain at this time and this project is still in a testing phase.

End excerpt from hive-smart-vm README.

To explain what the base project does (sans node-toad or toad-js): it checks for the existence of a public key for a set account name (this would be replaced with private key logic later), then checks the time from the head block of a Hive node, compares that time to the user's time in UTC, checks the node time again to make sure there is no discrepancy and then checks the 'target time' to see if the virtual machine can be booted.

The above method essentially creates a 'time lock' for the virtual machine and if the checks fail the virtual machine will not be booted.
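The comparison logic for such a time lock can be sketched like this. The function name and skew tolerance are my own assumptions for illustration; the head-block time itself would come from a Hive node, e.g. via the condenser_api.get_dynamic_global_properties call, which returns a head_block_time field:

```javascript
// Sketch of a 'time lock' check (hypothetical names, not the actual
// hive-smart-vm code): the VM boots only if the local clock agrees with
// the chain's head-block time AND the target time has already passed.
// Note: Hive nodes report head_block_time as UTC without a trailing 'Z'.
function timeLockOpen(headBlockTime, userUtcISO, targetTimeISO, maxSkewSeconds = 60) {
  const headMs = Date.parse(headBlockTime + 'Z'); // treat node time as UTC
  const userMs = Date.parse(userUtcISO);
  const targetMs = Date.parse(targetTimeISO);
  // Fail the check if the local clock and the chain disagree too much...
  if (Math.abs(headMs - userMs) > maxSkewSeconds * 1000) return false;
  // ...otherwise allow the boot only once the target time has been reached.
  return headMs >= targetMs;
}
```

Using the chain's own head-block time as the authoritative clock is what makes this a blockchain time lock rather than just a local timer... a user cannot simply change their system clock to bypass it, because the discrepancy check would fail.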

What can be done from there (after a successful boot) is still something that needs to be worked out. One idea that I have had is to have the Linux system compile a Hive wallet (if possible) and execute an on chain action... or some variation on that idea where essentially the virtual machine is hard coded to do something automatically via internal scripting. Please note that the timestamps on those 'internal actions' (like a wallet's compile time) could potentially be used in the validation process.

Alright, now back to the main point of this report... which is that I have found the custom model to be quite capable at helping find solutions to some rather complex problems!


A neat image that I had the model generate!

Want to join Hive?
Sign Up Via My Referral Below!
https://peakd.com/register?ref=jacobpeacock


Thanks for reading!

BTC Donations & Tips are appreciated!

bc1q0hgsylf3e5g6ycd6mdv0cs6cffvh5epy7233xm


