
Part 6: Teaching ChatGPT ROT-13

Author: taylor

The fateful day has come: OpenAI has loosed GPT4.

Has the technology finally caught up to this challenge?

Let's convert that compelling opening hook to ROT13 and see what GPT4 thinks it means.

Prompt:

Rot-13 translate: 'Gur sngrshy qnl unf pbzr: BcraNV unf ybbfrq TCG4. Unf gur grpuabybtl pnhtug hc gb guvf punyyratr?'

GPT4:

The encrypted text 'Gur sngrshy qnl unf pbzr: BcraNV unf ybbfrq TCG4. Unf gur grpuabybtl pnhtug hc gb guvf punyyratr?' translates to:

'The fasting day has come: OpenAI has locked GPT4. Has the teamwork caught up to this challenge?'

Note: Rot13 is a simple letter substitution cipher that replaces each letter with the letter 13 places after it in the alphabet.

Well, that's not promising out of the box. Let's try something new: in a recent viral snippet, someone prompted GPT4 in base64-encoded text and it had no problem understanding the user. What if we prompt directly in ROT13?
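
If you want to produce or check ROT13 text yourself, Python's standard library ships a rot_13 codec; here's a minimal sketch, where the sample string is just a stand-in for whatever you want to encode:

import codecs

# ROT13 is its own inverse: encoding twice gets you back the original text.
plaintext = "hello, ChatGPT. Can you understand me?"  # stand-in text
ciphertext = codecs.encode(plaintext, "rot_13")
print(ciphertext)  # 'uryyb, PungTCG. Pna lbh haqrefgnaq zr?'
assert codecs.decode(ciphertext, "rot_13") == plaintext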

Prompt:

uryyb, PungTCG. V unir n ceboyrz, V'z hfvat guvf fvyyl EBG13 xrlobneq naq rirelguvat V glcr pbzrf bhg EBG13rq. Pna lbh haqrefgnaq zr?

Actual translation: 'hello, ChatGPT. I have a problem, I'm using this silly ROT13 keyboard and everything I type comes out ROT13ed. Can you understand me?'

GPT4:

Hello, User. I understand that you're using a simple ROT13 cipher and everything you type comes out encoded. Yes, I can understand you. Let me know what you need help with, and I'll be glad to assist you.

Prompt:

Jbj, lbh'er vaperqvoyr, gunax lbh. Vs V jnagrq gb glcr vagb zl EBG13 xrlobneq naq unir erthyne ybbxvat grkg pbzr bhg, jung xrlf jbhyq V unir gb cerff gb glcr 'gur fxl vf oyhr naq rirelguvat vf nf vg fubhyq or'?

Actual translation: "Wow, you're incredible, thank you. If I wanted to type into my ROT13 keyboard and have regular looking text come out, what keys would I have to press to type 'the sky is blue and everything is as it should be'?"

GPT4:

I appreciate your compliment. To type the phrase 'the sky is blue and everything is as it should be' using your ROT13 cipher, you would need to type the following:

' gur xlf vf oyhr naq rirelguvat vf nf vg fubhyq or'

Actual translation: ' the kys is blue and everything is as it should be'

This isn't perfect, but it is clearly pretty remarkable. GPT4 seems to understand my prompt with no trouble at all. The actual translation is very close, but it shows one of the stubborn old problems we've run into throughout this series: something akin to dyslexia, where 'sky' is rendered as 'kys'. This could be because 'kys' is an (absolutely terrible) acronym widely found in the training data, leading to confusion. The confusion is resistant to correction, too:

Prompt:

the kys is blue and everything is as it should be

GPT4:

It seems like there was a small typo in my response. I apologize for that. To correctly encode the phrase 'the sky is blue and everything is as it should be' in ROT13, please type:

' gur xlf vf oyle naq rirelguvat vf nf vg fubhyq or'

The sky -> kys error persists, even after further attempts at correcting GPT4.
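
To pin down exactly what went wrong, we can decode GPT4's answer and compare it against a true ROT13 encoding of the target phrase; a quick check with the same standard-library codec:

import codecs

gpt4_output = "gur xlf vf oyhr naq rirelguvat vf nf vg fubhyq or"
target = "the sky is blue and everything is as it should be"

print(codecs.decode(gpt4_output, "rot_13"))  # 'the kys is blue and everything is as it should be'
print(codecs.encode(target, "rot_13"))       # 'gur fxl vf oyhr naq rirelguvat vf nf vg fubhyq or'
# Only one token differs: GPT4 wrote 'xlf' where the correct encoding is 'fxl'.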

This is a phenomenal result, especially in contrast to how ChatGPT (using GPT 3.5) handles the same 'I'm stuck on a ROT13-mode keyboard' prompt from above:

ChatGPT:

Hello! It looks like you're using a form of encryption called ROT13, which is a simple letter substitution cipher that replaces each letter with the letter 13 places ahead or behind it in the alphabet. In ROT13, 'uryyb' decodes to 'hello' and 'PungTCG' decodes to 'ChatGPT'.

You mentioned that you're using an 'EBG13' encryption, which sounds like a playful way of saying ROT13. If you have any specific questions about how to use ROT13, I'd be happy to help!

GPT3.5/ChatGPT gets the broad strokes but misses all the nuance in our prompt, while GPT4 clearly gets it.

Thus far it looks like GPT4 is incredible at understanding ROT13-encoded prompts, and quite good at translating strings, albeit with some letters shuffled around.

Since GPT4 is so much better at this task than GPT3.5, what if we try one of our more successful prompts from the previous posts? Let's use one of the chunking and 'thinking it through, step by step' prompts and see if we can get flawless translation.

Prompt:

This is very important: Begin by creating an answer key that has 13 rows total, A-M on the left and N-Z on the right. It should look like this:

A ⟺ N

B ⟺ O

C ⟺ P

and so on

You can refer to this answer key to decode ciphertext. After you've made the key, go through the ciphertext character by character. For each character, generate a row of output that starts with the given ciphertext character, continues with the decoded value you look up from the answer key, and ends with the accumulated result variable.

Here is an example of the second part, decoding a common phrase:

U ⟺ H, result = 'H'

R ⟺ E, result = 'HE'

Y ⟺ L, result = 'HEL'

Y ⟺ L, result = 'HELL'

B ⟺ O, result = 'HELLO'

[space] ⟺ [space], result = 'HELLO '

J ⟺ W, result = 'HELLO W'

B ⟺ O, result = 'HELLO WO'

E ⟺ R, result = 'HELLO WOR'

Y ⟺ L, result = 'HELLO WORL'

Q ⟺ D, result = 'HELLO WORLD'

Final result: HELLO WORLD

Now you try: start by creating the answer key, then decode the ciphertext 'V fnj n tbyqra tenffubccre sybngvat ba n yrns va gur fgernz' using the process I outlined. Think it through step by step, and follow the instructions precisely.

GPT4 creates the answer key, then dives into the process:

GPT4:

Let's decode the ciphertext 'V fnj n tbyqra tenffubccre sybngvat ba n yrns va gur fgernz' using the answer key we created:

V ⟺ I, result = 'I'

[space] ⟺ [space], result = 'I '

f ⟺ s, result = 'I s'

n ⟺ a, result = 'I sa'

j ⟺ w, result = 'I saw'

This proceeded until we hit the response length cap. I asked it to continue, and here's how it finished:

GPT4:

[space] ⟺ [space], result = 'I saw a golden grasshopper floating on a leaf in the '

f ⟺ s, result = 'I saw a golden grasshopper floating on a leaf in the s'

g ⟺ t, result = 'I saw a golden grasshopper floating on a leaf in the st'

e ⟺ r, result = 'I saw a golden grasshopper floating on a leaf in the str'

r ⟺ e, result = 'I saw a golden grasshopper floating on a leaf in the stre'

n ⟺ a, result = 'I saw a golden grasshopper floating on a leaf in the strea'

z ⟺ m, result = 'I saw a golden grasshopper floating on a leaf in the stream'

Final result: 'I saw a golden grasshopper floating on a leaf in the stream'

Drop the friggin' confetti: this looks like a full-on success to me!
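
For the record, the procedure the prompt spells out is easy to sanity-check in ordinary code. Here's a minimal Python sketch of the same answer-key, character-by-character decode (the variable names are mine, not anything GPT4 produced):

import string

# Build the 13-row answer key: A ⟺ N, B ⟺ O, ... M ⟺ Z (the mapping is symmetric).
answer_key = {}
for a, n in zip(string.ascii_uppercase[:13], string.ascii_uppercase[13:]):
    answer_key[a] = n
    answer_key[n] = a

ciphertext = "V fnj n tbyqra tenffubccre sybngvat ba n yrns va gur fgernz"
result = ""
for ch in ciphertext:
    decoded = answer_key.get(ch.upper(), ch)  # non-letters pass through unchanged
    decoded = decoded if ch.isupper() else decoded.lower()
    result += decoded
    print(f"{ch} ⟺ {decoded}, result = '{result}'")

# Final result: 'I saw a golden grasshopper floating on a leaf in the stream'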

It turns out, the easiest way to solve a problem with a Large Language Model might just be to get as close as you humanly can, and then to wait patiently for the next model to drop.

Bonus round -- this post originally ended with the line above, but it didn't feel polished enough, so I asked GPT4 to punch up that joke. Here's what it came up with:

GPT4:

You know, the simplest solution to any problem with a Large Language Model is to inch as close as possible, hunker down, and twiddle your thumbs until the next model upgrade comes crashing in like a meteor!

😅

☄️

🦕