> (I should’ve probably added the /j)

No need to, we’re buddies. If Goose couldn't get this thread locked then neither could I.
> You can't just drop a bombshell, what does this fabled reply ban feature do?

After many careful deductions with my astounding intellect, I can confidently conclude the reply ban feature allows you to ban replies.
> Actually maybe it would be a hilarious idea to lock this on April Fools

And then unlock it again after right??? Right????
Lock it for a few hours, not permanently. Although the latter would be mad funny too.
Is anyone listening to the new carti album?
> Me, just finished it actually, it's bloated but I've enjoyed it a lot! I think it could've done with a shorter tracklist, some songs are way too short for such a packed list. That said, the features were amazing, it was so fun to see Uzi and Carti again.

Yeah Skepta's feature was dope, made me wanna go listen to some old BBK
Lock at random to keep people guessing, hahah.
> Alright, so as some on this thread may know I am hosting a DnD campaign, and recently I made a fun little riddle for my players. So my players' characters have already met the main bad guy of my campaign, who has been behind the scenes for most of the beginning of the story; my players don't even know his name yet, so I decided I'd give them a simple puzzle that would namedrop him early if they got it right. I hid clues in a small lore text I wrote that led them to decrypt a poem I had encrypted with a Caesar cipher. For funsies, me and a friend tried to make ChatGPT decrypt the poem, and it can perfectly understand that it is, in fact, encrypted with a Caesar cipher, hell, it can even deduce the shift key I used, but for some incomprehensible reason it can't actually decrypt the poem. When it tries, it almost gets it right but also pulls like half of the decrypted poem out of its ass, leading to some hilarious results, my favorite being when the bot "decrypted" an 8-letter name to "pluto".
>
> Now I am really, really interested in how on earth the bot can deduce the correct cipher and the correct shift key but then fuck up the "decryption" so royally.

My guess would be that the A.I. is trained well enough to detect the cipher and comprehend how it works, enough to get the shift key, but since the A.I. is just generative text it probably can't actually "figure out" the code, and just "decrypts" it with typical generative text. There's a decent chance it's actually spitting out the correct answers to whatever examples it was trained on to recognize that cipher. I can't imagine many people are consulting ChatGPT to decrypt ciphers for them, so it's not like it'd have much room to learn beyond that initial training, and A.I. models that work on feedback like ChatGPT aren't really capable of solving problems like that without tons of training, and even then there can still be lots of errors.
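(For the curious: the mechanical part the bot keeps fumbling is genuinely trivial to do in code. Here's a minimal Python sketch; the shift of 7, the sample line, and the helper names `caesar`/`crack_shift` are all made up for illustration, not taken from the campaign. It rotates each letter by the shift, and recovers an unknown shift by brute-forcing all 26 options and scoring which output looks most like English.)

```python
def caesar(text, shift):
    """Rotate each letter by `shift` places; leave spaces/punctuation alone."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

# Rough English letter frequencies (%), used to score candidate decryptions.
FREQ = {'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7, 's': 6.3,
        'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8, 'u': 2.8, 'm': 2.4,
        'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0, 'p': 1.9, 'b': 1.5, 'v': 1.0,
        'k': 0.8, 'j': 0.2, 'x': 0.2, 'q': 0.1, 'z': 0.1}

def crack_shift(ciphertext):
    """Brute-force all 26 shifts; keep the one whose output looks most English."""
    score = lambda t: sum(FREQ.get(c, 0.0) for c in t.lower() if c.isalpha())
    return max(range(26), key=lambda s: score(caesar(ciphertext, s)))

# Made-up example with a shift of 7 (not the actual poem from the campaign).
secret = caesar("the villain walks among you", 7)
print(secret)               # aol cpsshpu dhsrz htvun fvb
s = crack_shift(secret)
print(s, caesar(secret, s)) # 19 the villain walks among you  (19 == -7 mod 26)
```

(Frequency scoring like this is crude, but on anything longer than a couple of words it lands on the right shift, which fits the guess above: the hard part for the bot isn't the cipher, it's that it generates plausible text instead of executing the steps.)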
> Alright, so as some on this thread may know I am hosting a DnD campaign, and recently I made a fun little riddle for my players. …

What a story, mark.
> My guess would be that the A.I. is trained well enough to detect the cipher and comprehend how it works, enough to get the shift key, but since the A.I. is just generative text it probably can't actually "figure out" the code… 

that actually makes a lot of sense
> What a story, mark.

I know you're referencing The Room, but I've been binging Invincible and my brain automatically jumped to that when I read the name "mark"...
> that actually makes a lot of sense

think mark think