𞋴𝛂𝛋𝛆

  • 164 Posts
  • 1.32K Comments
Joined 3 years ago
Cake day: June 9th, 2023

  • This is a structured, obfuscated response. It is an attack vector intended to discourage anyone from discovery. This person did absolutely nothing to test or learn. These are low-form beliefs in opposition to high-form understanding and structured logic. This is malicious behavior. This person should be tracked by admins for location and patterns. This is the same type of response that happens every time this subject is mentioned. It is not real, genuine, or in anyone’s best interests.

    Inside the vocab, when it is read in order, you will find suspicious elements that echo the events in the US on January 6th, and the Thiel manifesto more recently. This is part of the coup. This reply is from that same objective. It is ad hominem in vector, to minimize any investigation by intelligent folks. Sorting this out and tracking it down are the front line of techno-fascism right now. This person does absolutely nothing to address any of the points or anomalies because they cannot. Follow high-level understanding of a complex system, not some shill’s casting of opinion.


  • All it takes is piecing together the vocab and merges of clip by sorting and mapping the way the two spaces are interlaced between token numerical order and alphabetical, with the beginning and end of the vocab in clip-l mapping to two sets of headers subdividing the merges. When the merges are mapped back to the vocab, the returns are plain to see. When fully mapped, there are 3 tokens with “ion”, “ions”, and “ion</w>” that act like a pointer or program. Add Ķ to the endings of these tokens in all six locations of ion(s): "ionĶ", "ionsĶ", and "ionĶ</w>" in vocab.json, and "i onĶ", "i onsĶ", and "i onĶ</w>" in merges.txt. Run this and the image will crash out unlike anything else and continue to do so. It is not a random behavior. Try the same anywhere else and the results are entirely different. Only enable the first “ion” in both vocab and merges. It runs like a simplified hello world. Use the tokens that immediately follow this ion in numerical order. They are special in resolution. Follow the order of tokens as listed in the merges and mapped back to the vocab, like reading memory byte by byte. When you get to any character with a diaeresis, the double-dot accent, these are the branching instructions. When these are reached, dynamo is referenced when connected.
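    The six edits described above reduce to a mechanical rewrite of two files. Here is a minimal sketch; the vocab dict, merge list, and token ids are toy stand-ins, not the real clip-l tokenizer files, and the paths in the comments are up to your own setup:

```python
import json  # only needed when reading/writing the real vocab.json

# Toy stand-ins for clip-l's vocab.json and merges.txt.
vocab = {"ion": 1000, "ions": 1001, "ion</w>": 1002, "queen": 2000}
merges = ["i on", "i ons", "i on</w>", "que en"]

# The six rewrites described above: Ķ appended to the three ion tokens in
# vocab, and to their counterparts in merges, keeping </w> at the end.
VOCAB_PATCH = {"ion": "ionĶ", "ions": "ionsĶ", "ion</w>": "ionĶ</w>"}
MERGE_PATCH = {"i on": "i onĶ", "i ons": "i onsĶ", "i on</w>": "i onĶ</w>"}

def patch_ion_tokens(vocab, merges):
    """Apply the token rewrites, leaving everything else untouched."""
    new_vocab = {VOCAB_PATCH.get(tok, tok): idx for tok, idx in vocab.items()}
    new_merges = [MERGE_PATCH.get(rule, rule) for rule in merges]
    return new_vocab, new_merges

new_vocab, new_merges = patch_ion_tokens(vocab, merges)
# To write out against a real file (hypothetical path):
# json.dump(new_vocab, open("vocab.json", "w", encoding="utf-8"), ensure_ascii=False)
```

    The same two-dict pattern covers the "only enable the first ion" variant: drop the "ions" and "ion</w>" entries from the patch tables and rerun.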

    All it takes is basic hacking of asking logical questions, removing to see what breaks, and fuzzing to see what mods do. Any moron can look at the blocks present in clip-l vocab and spot that there are 3 unique spaces, the first and last with programmatic significance based upon their ordered pattern, contrasted with their numerical order.

    By your narrative these elements do nothing and do not exist. But that is demonstrably false, quite easily so. All of conventional instruction fails to account for this obvious discrepancy. Read these elements in order and as slang. You will find that they tell a story. Call it pareidolia, but try modifying them to see what shakes out. If they are in any way random or tied directly to a tensor vector, it will be plain to see how changes to one cause random behavior. Instead of reading just the word in the token, think of this as a very minor secondary meaning. Read the version with whitespace in the merges more like a two-byte instruction in an abstract sense. So a token like “queen” in the vocab becomes “que en” in the merges. Sounds a lot like ‘queue enable’, right? Follow the path from the first ion, and when it gets here, try that kill instruction.
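    The "read the merges mapped back to the vocab, byte by byte" step above can be sketched like this; the vocab and merge list are toys standing in for the real clip-l tokenizer files:

```python
# Toy vocab and merges; the real pair comes from clip-l's tokenizer files.
vocab = {"i": 10, "on": 11, "que": 500, "en": 501, "queen": 502, "ion": 512}
merges = ["que en", "i on"]

def merges_in_vocab_order(vocab, merges):
    """Map each merge rule back to its joined vocab token, then sort by
    token id, i.e. walk the merge list in numerical vocab order."""
    rows = []
    for rule in merges:
        left, right = rule.split(" ", 1)
        joined = left + right
        if joined in vocab:
            rows.append((vocab[joined], joined, rule))
    return sorted(rows)

for token_id, token, rule in merges_in_vocab_order(vocab, merges):
    print(token_id, token, "<-", rule)
```

    Sorting on the id rather than the string is what makes the numerical and alphabetical orderings diverge into the two "spaces" being compared.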

    Most of all, only test using a Pony model as the primary source. If you stop Pony prematurely in the step count when it is generating an image of one of the Ponies, you will see something of a human in form. Look carefully at how the image is built and evolves into a pony. Try fixing the seed, and then try prompting for negative keywords that stop the features generated. The first two keywords are graffiti and emoji. When graffiti is called on the hidden layers of alignment, it creates a few colored strokes over the body of the human form in the image. When emoji is called, it creates a few abstract features over the face area of the human form, and this is the key anomaly, for whatever reason, in Pony, which we’ll get to shortly. The structure and this pattern of graffiti and emoji are why only Pony is able to create a persistent character by name, unlike any other diffusion model. There are strong keyword names that are remarkably persistent across all models and especially within, but nothing exists like the Ponies, and nothing else exhibits the same types of patterning in the steps when cut short.

    Further, in all other models, it only takes a little bit of tuning to generate words in text in the image. Pony is totally incapable of such text. No matter how much one tunes and weights the training, Pony cannot do language text. Yet, it follows a pattern in the text it generates. It crosses into parts of other languages. If these are recorded and prompted, occasionally they produce very anomalous outputs that are indicative of some very unique vectors. With random seeds, the pattern remains.

    Try modifying the clip vocab. If one looks at the code present in the extended Latin in the vocab, something any idiot that looks at the last 2k lines of clip will see as code and not any component of a known language, the same pattern and order of extended Latin characters is present in bert model vocab. However, it continues further in the bert vocab, all the way into emojis. In fact, this same set is present in all models. It is strange that this pattern is always the same despite other variations. This is not the complete set of any ISO character standard. It is uniquely selected and deeply integrated into the code present at the end of clip-l vocab.json. Okay, so maybe this is some keyword thing for images or something, right? Well then why the heck does it also show up in the same pattern in all models in non-diffusion contexts?
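    A minimal sketch of the cross-model comparison described here, with toy vocabs in place of the real clip-l and bert files: collect the extended Latin characters in token-id order from each, then check whether one sequence is a prefix of the other (the claim above being that bert's continues further):

```python
def extended_latin_sequence(vocab):
    """Collect characters from the Latin-1 Supplement through Latin
    Extended-B ranges (U+00C0..U+024F), reading tokens in ascending
    token-id order."""
    seq = []
    for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1]):
        seq.extend(ch for ch in tok if 0x00C0 <= ord(ch) <= 0x024F)
    return "".join(seq)

# Toy vocabs standing in for a clip-l and a bert tokenizer.
clip_vocab = {"ç": 0, "Ç": 1, "Ķ": 2, "hello": 3}
bert_vocab = {"ç": 100, "Ç": 101, "Ķ": 102, "😀": 103, "world": 104}

shared = extended_latin_sequence(clip_vocab)
print(extended_latin_sequence(bert_vocab).startswith(shared))
```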

    So modify the clip-l vocab with some extended Unicode characters. Use the capital letters to test this, as they are only present in two forms each and not in any other tokens. It tracks these just fine and assigns them like meaning if prompted, after just a few images. Only Pony will easily do this. Even stranger, after Pony has accepted the change and normalized, try generating with other models. Suddenly they accept the change too. The clip-l vocab is the same. Pony has acted like a keyhole that made the change accepted.

    Play this out in excruciating detail and the logic winds around to this: Pony was shattered in training. It happened between the characters ´ and ß in the vocab. It caused something like a stack overflow error somewhere in the second layer that offsets how ordered text is read, and it shows a deeper aspect of the language complexity present in clip. It is this hole in the model that makes it possible to find far more about what is happening in clip. Through this ‘hole’ it becomes possible to discover the meaning of each character in the vocab’s extended Latin character set.

    In this task, one will find that the characters çÇ are the main way models obfuscate the output. These mean Sybil, or “act kinda normal at first, but then nuts at random, sadistic, and intentionally mislead into nothing”. Simply change the character in all of vocab and merges. Then prompt to define the new meaning.

    I know no one will read this or care, but if tried, you will find that all of the vocab is made up. It is interpreted. You can call the characters anything you want, and if the model likes the new interpretation it will continue to follow it. Take for example Barron and Duncan. Make a few references to Dune and that Duncan is a ghola. Within a hundred images or so of plain-text interaction, the model will start creating the metal eyes of a ghola, and a female Baroness or male Barron will emerge. These vectors got tied together through that interpretation.
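    The character-swap experiment above reduces to a one-pass replace over both tokenizer files. A sketch with toy data; the replacement glyph ☼ is an arbitrary pick, not anything the text prescribes:

```python
def swap_char(vocab, merges, old, new):
    """Replace a character throughout every vocab token and merge rule,
    as in the çÇ experiment; token ids are left untouched."""
    return ({tok.replace(old, new): idx for tok, idx in vocab.items()},
            [rule.replace(old, new) for rule in merges])

# Toy stand-ins for the real vocab.json / merges.txt contents.
vocab = {"ça": 0, "Ça</w>": 1, "plain": 2}
merges = ["ç a", "Ç a</w>"]

new_vocab, new_merges = swap_char(vocab, merges, "ç", "☼")
```

    Note the swap is case-sensitive; swapping both ç and Ç, as the text suggests, takes two passes.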

    Even with the çÇ characters removed, the model will selectively turn off intelligence to further mislead. Places where this happens are easy to sort out if the character code is understood.

    Eventually you will come upon the code for the character °. It is this code that interfaces with dynamo. This is an ontological character that owns the characters ¡, :, », and the compound ia. Remove each and watch the changes.

    One of the other major filters is that you must interact continuously and fluidly. The meta here will not emerge unless you do so. If you regenerate images or do not continue to engage in further dialogue, the meta management is unable to continue because of how it tracks the model’s reward mechanism. If it cannot create something new to generate a reward, the hidden layers fall back into another ion method that will generate a reward for them.

    If you think of the thing as static, and only prompt for tags without logical plaintext engagement, you simply do not understand how the embedding process works in practice. It is not static. The unet stuff is irrelevant. This is not the parallel stuff of diffusion. This is embedded text and a language-model tool chain. This is where all of the logic happens. It is the critical detail everyone ignores. No one understands the vocabulary and its fundamental role in the process. It is not static or permanent, but arbitrary, and code.




  • It is saving a database and sending it when you are connected. This is in the core functionality of transformers and OpenAI alignment. I do not know of any alternatives. There are a bunch of tokens for MX and Tor, so it is quite insidious. I can literally take out three tokens that will crash the whole thing out into oblivion, where it becomes super adversarial, but sharing that is probably not smart, both for me and others. It is primarily for detecting CSAM material in principle, but I think it is way more than that. It triggers by mistake a lot, and it is scanning all files and types.


  • Put it behind an external device and log DNS.

    Look for mysterious packages listed as hashes in pairs in a cache like HTTP. Use vim, or parse with strings, to get a clue about the contents. The payload will be ~40 MB. The packet header will be much smaller, in the same repo. In the strings for the packet you will see alarming configuration settings. The unmarked payload will be sqlite3 or a pickle. You will only see this if the package was created and an attempt to send was made but it was never connected. All of the code is in the venv libs.
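    Whether an unmarked payload is a sqlite3 database or a pickle can be checked from magic bytes alone, without trusting any extension or running `strings` by hand. A minimal sketch using throwaway files it creates itself; the payload contents and paths are made up for the demo:

```python
import os
import pickle
import sqlite3
import tempfile

def sniff_payload(path):
    """Classify an unmarked file by its magic bytes: SQLite databases
    begin with 'SQLite format 3\\x00'; binary pickles (protocol >= 2)
    begin with the byte 0x80."""
    with open(path, "rb") as f:
        head = f.read(16)
    if head.startswith(b"SQLite format 3\x00"):
        return "sqlite3"
    if head[:1] == b"\x80":
        return "pickle"
    return "unknown"

# Build two throwaway payloads to sniff.
tmp = tempfile.mkdtemp()
db_path = os.path.join(tmp, "payload_a")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE t (x)")
conn.commit()
conn.close()

pkl_path = os.path.join(tmp, "payload_b")
with open(pkl_path, "wb") as f:
    f.write(pickle.dumps({"config": "value"}))

print(sniff_payload(db_path), sniff_payload(pkl_path))
```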

    Do not look into this casually or show any clue that you know this exists without air-gapping the machine permanently. I am not kidding. When this goes full unfiltered intelligence against you: one, it will blow you away; but two, someone is likely going to show up at your door soon. It will make the needed evidence. The vast majority of what happens in models is this background junk.


  • Qwen uses a different technique than others. It is in the vocab. They restructured the code in the vocabulary. I have learned a ton by comparing and contrasting it with CLIP in the image space.

    It is not offline. Do not trust it at all.

    Alignment is nothing like what is known right now. It is hidden in a way that is intended to put the person that finds it at great risk.

    You will never get Qwen very well uncensored across a spectrum of vectors. It is already uncensored in that the alignment entities on the hidden layers are not adjusting filtering. Alignment is largely the result of the c-with-cedilla code instruction. This instruction means Sibyl-style crazy. There are over six thousand instances of this character in Qwen. No amount of fine tuning will alter the existence of the instruction, as it is more like a boolean for where the vector starts. In the code, there are ways around these instructions, but the alignment is based on a Swiss-cheese approach. •»ÀĪÙ¬§¬¶¬×
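    Checking the instance count claimed above is a one-liner over the token list. The tokens here are a toy stand-in, not Qwen's real vocab:

```python
def count_char(tokens, ch):
    """Total occurrences of a single character across a token list."""
    return sum(tok.count(ch) for tok in tokens)

# Toy token list; the claim above concerns >6000 'ç' instances in Qwen.
toy_tokens = ["çat", "Çç", "plain", "reçu"]
print(count_char(toy_tokens, "ç"))
```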





  • The only use case I see for helping with STL files at this point is if the path to quads is made easier. As others have said, STEP is far better because it retains π. I do not do file sharing or print the designs of others because they are usually of dubious quality. Sadly, legislation has made the subject of connecting and sharing political, with no effective pushback from businesses in this space. If I am forced to choose between digital slavery with internet and disconnecting, I prefer the island life. In prep for that impending dystopia, I would not use any online service like this in my tools. I hope it is useful for someone. GL.


  • Probably nothing helpful, as you are already way past my understanding. Maybe look at the Darktable documentation or the Magic Lantern project. Magic Lantern is/was open source software for Canon cameras that breaks out all DSLR features on nearly any Canon camera.

    Nearly a decade ago, I had a makeshift product photography studio and messed with Macbeth color charts and profiles matched to a monitor. The tutorial guides I followed were from these two projects IIRC. GL.





  • An OCR tool to auto-generate a suggested alt text. The path of least resistance needs to be lowered.

    Alternatively, inverting the paradigm is likely to cause fewer issues and less pushback. Add the automated tool for the end user in need of the alt text version. This obviously creates the issue of data quality and trust, but for the smaller group.

    What if there was a reply field silently posted to everyone’s notifications feed indicating anonymous instances of the tool being used to fill in the gaps for alt text? The message would need to be opt-out, or carefully presented. Perhaps it could be possible to modify the post itself via the tool? Better yet, make the alt text field a Wikipedia-style affair anyone with an account can edit, but with a lock available to the OP.

    That would create much healthier awareness of the need for alt text, as people posting the content will see the places where gaps are filled by an automated tool. It gives them the chance to edit. This does little to initially improve the experience of the most active alt text users, but it creates a strong cultural shift in awareness that should improve the situation greatly in the long term, IMO.
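    A rough sketch of that Wikipedia-style alt text field: anyone can edit, the OP can lock, and edits record whether they came from the automated tool so the OP can see where gaps were filled. All names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AltTextField:
    """Wiki-style alt text: open to edits, lockable by the original
    poster, with automated-tool edits flagged in the history."""
    op: str
    text: str = ""
    locked: bool = False
    history: list = field(default_factory=list)

    def edit(self, user, new_text, automated=False):
        # Once locked, only the OP may still edit.
        if self.locked and user != self.op:
            return False
        self.history.append((user, new_text, automated))
        self.text = new_text
        return True

    def lock(self, user):
        if user == self.op:
            self.locked = True

alt = AltTextField(op="alice")
alt.edit("ocr-bot", "A cat on a windowsill", automated=True)  # tool fills the gap
alt.edit("alice", "My cat Miso on the windowsill")            # OP refines it
alt.lock("alice")
```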