The only unbreakable cryptosystem known - the Vernam cipher

Variability Encryption

I have written mostly pseudocode for the encryption side of things; some parts refer to notes below because of my lack of programming-language training and skills. The decryption part will have to come at a different time.

Here I will discuss the reasons it works. First, it is symmetric encryption, as the same key will both encrypt and decrypt. Decryption notes will come later when I have time.

1) Plain Text/Known Text
A request was made for me to use a known text format. I saw it, but I have been working 60-70 hours a week and also trying to create the pseudocode. I can tell you that using a known text and then modifying it won't work. Here is why: the combinations create a ternary base, and the juggle and shuffle are designed to increase disorder. If there were one run of each process and the order were shuffle, juggle, combinations, then it would be plausible for a pattern to show up. However, the repetitions of the three cycles, and the key-derived pattern of each process inside the three cycles, will create a severe jumble of 1's and 0's even if our source was entirely 0's or 1's to start with.

2) Block Cipher attacks
The way the entire system works, we have no real identifiable blocks to work with. The key is read in a dynamic manner, which causes repeated uses of the key to start at different portions of it. Further, the combinations phase has limits on string length, but those limits allow a lot of variability in the length of the specific portion being applied to. Inside the string lengths we have further variability in the number of combinations, and the salts shake up those variabilities in many ways. There is no block as it were, except where the shuffle occurs (well, kind of), and that is not the primary function of the system. In theory it may be necessary to make blocks for the shuffle portion, but those blocks will not function in the same manner as existing blocks, which will require testing for each and every possible variation.

3) Attempts to use a similar key (or part of a known key)
These attacks can happen when there is a man-in-the-middle attack and the key is actually made up of a long-term key between users, a mid-term key, and a short-term key, such as the Signal app uses - or however many versions of multiple key portions. So assume your man-in-the-middle attack got half the key and you try to apply methods to use it to get in. The first problem is that the key lengths for multiple key portions do not need to be fixed. The idea of a fixed key length is laughable considering the methods used in variability encryption. A variable key length of 32 to 64 bits, where the accessed portion makes up a fair to sizable part thereof, does not give an attacker enough information to recreate the key size with any reliability without trying all possible sizes. Having a small key is not detrimental to the system either; the difficulty admittedly increases with key size, but the variability system can start at 32 bits without issue.

4) Difficulty in Brute Forcing increases with file size
The Juggle routine definitively increases the problem for decryption. Since it can apply to the whole of the file at once without much predicted issue, an enemy operator must process the whole file for three of the 9 processes, and they have to accurately judge when the juggle process was used each of the three times. The combination stage dramatically increases difficulty as well, since it can encode a lot of data in the larger string sizes, making accurate string-length detection a necessity. The shuffle is in theory by far the easiest portion to decrypt, but the other stages make accessing it properly very difficult.

5) The key has other strengths
Due to the way the key works - it identifies different string lengths, and it can be appended to itself indefinitely with a low possibility of exact repetition - the key is a weapon on its own. To use any information gained, an attacker would need to know how many repetitions of the key occurred and exactly where the individual key portions were spliced.

6) Statistical Attacks
There is, admittedly, some possibility of weakness to statistical attacks. The weakness, however, is very low relative to the total capabilities of the system. You would need to know a significant portion of the key and have a known-text example. Given both of those at the same time, you could in theory attempt to derive the sequence of the processes with enough effort. However, I would say this effort will still be harder by far than AES 512.

7) Brute Force
Combinations make for a large spread you need to test; the juggle forces tests to be done three times over the whole of the file; the shuffle makes you waste time and energy trying to derive the proper order of things. If you encrypt a megabyte, a not unheard-of size (sarcasm), the attacker must account for all possible key lengths and all possible key variations, and because of the way the combinations work they must also be able to predict, to a fair extent, the source of the data - yes, the data itself helps create the encrypted results thanks to the combinations system. Thus the processor time would be far in excess of the time normally required for every iteration of the possible key, at all possible file sizes, using the combinatorics. In fact it becomes factorial, which should definitely scare the crap out of cryptologists.

8) Yes, some attacks will always succeed.
You can buy the password, you can beat the password out of someone, you can probably derive the sequences if you can watch the processor requirements, getting into the RAM while it is working will get you far, and so forth. However, without full information - say it is an ATM speaking to the bank server, you are in the middle, and the encryption code is hardwired - you will get nothing for it.

9) While a one-time pad is obviously going to be the strongest, the methods involved in Variability Encryption leave no doubt that if, right now, all the atoms in the sun were made into a supercomputer and its power doubled annually, a petabyte would never be successfully cracked before entropy destroys everything. AES 512 cannot say that under those standards. Yes, I am bragging, but dammit, I feel good having found a statistical method that frankly cannot ever be decrypted with brute force unless the file size is small, the key is small, and the attacker knows that. By small I mean 8 bits, or some similarly silly size.

10) Patent and Patent Pending. My lawyer tells me I have to include that in my works, and I listen to my lawyer. If you want in, you can negotiate with me.

11) I think - and I do not have proof, as I have been working exclusively on this encryption routine and the other patent applications - that AES can be exploited due to the high storage potential of combinatorics. It will need a lot of my time, but it feels right. Again, though, I am not going down that squirrel hole until I have real code that shows how the system works, so I can take it before some very rich individuals, or until one of the other patent applications gets attention.

_________________ PSEUDO CODE (Kinda) _____________________

Pseudo Code - Variability Encryption // Variability is key, there is nothing else. This won't be real Pseudo Code but it should suffice for most here. //
Start:
Load Key, to be called Key_Card
Load File to Encrypt, to be called Step_Zero

// Process_1 = Combin_First //
// Process_2 = Combin_More //
// Process_3 = Juggle_Scramble //
// Process_4 = Shuffle_Mixup //
// Process_5 = Combin_Restore //
// Process_6 = Juggle_Restore //
// Process_7 = Shuffle_Restore //
// Process_8 = Ternary_Binary //
// Process_9 = Binary_Ternary //
// The processes are the main methods involved in first encrypting, then in decrypting //

Hash Key_Card, to be called Hash_Key

Process:
// The goal is to get a value from 1 to 6 to generate a pattern of the processes above; assume that on an error another routine assigns a value to each by using the key to generate a quasi-random option //
Value of Hash_One = if Hash_Key ends with 7, 8, 9, or 0, divide by ten and drop the decimal; else use the last digit.
Value of Hash_Two = if the first digit in Hash_Key is 7, 8, or 9, then look at the next digit (if that digit is a one, look at the one after); else use the digit.
Value of Hash_Three = if RoundDown(Hash_Key / 5) = 7, 8, 9, or 0, then if RoundDown(Hash_Key / 4) = 7, 8, 9, or 0, then RoundDown(Hash_Key / 6); else use the digit.

Order_One uses Process_1, Process_2, Process_3, Process_4.
Order_Two uses Process_2, Process_3, Process_4.
Order_Three uses Process_2, Process_3, Process_4, Process_8.
Process_1 = a, Process_2 = b, Process_3 = c, Process_4 = d, Process_8 = e.

// The order of each of Order_One, Order_Two, and Order_Three gets determined here //
Order_One = if Hash_One = 1 then acd, else if 2 then adc, else if 3 then cda, else if 4 then cad, else if 5 then dac, else if 6 then dca.
Order_Two = if Hash_Two = 1 then bcd, else if 2 then bdc, else if 3 then cdb, else if 4 then cbd, else if 5 then dbc, else if 6 then dcb.
Order_Three = if Hash_Three = 1 then bcde, else if 2 then bdce, else if 3 then cdbe, else if 4 then cbde, else if 5 then dbce, else if 6 then dcbe.
// Note that this makes the order of the processes per each of 3 distinct rotations difficult to predict, allows an initial change into ternary during the first combinatorial phase, and returns to binary at the end of the process //

Create file: Process_Run
Process_Run = Step_Zero

{ // Process_1 //
Load Process_Run
Load Key_Card
// The basis for the key card is simple: we identify how many bits we are going to use for the string length, use that to identify the possible length of the combinations portion of the key, then see if there is going to be a salt; if there is, we read the next 3 bits. //
Load last 3 bits of Hash_Key, find the Decimal + 1 and save as Hash_Ke1 // This will result in a 1 to 8 value //

// Declaring a few things that will be used but will be modified in the following processes //
Str_Len = 0
Key_Run = Key_Card
Salt_True = 0
Com_Pare = 0
Com_Cnt = 0
Str_Cnt = 0
Chk_Salt = 0

// Replacement_File.txt will be a separate post; it will be a large file holding a replacement table based upon combinatorics. It is built for a variety of sizes but will not contain a full and entire table; it should be sufficient for people here to understand how it works //

{ If Hash_Ke1 > 0 & < 3 then Str_Len = 4; if Hash_Ke1 > 2 & < 6 then Str_Len = 5; if Hash_Ke1 > 5 & < 9 then Str_Len = 8 }
{ If Key_Run does not have sufficient length then Key_Run = Key_Run + Key_Card
Remove Str_Len bits from Key_Run and identify the Decimal + 1 value of these bits. This will be called Str_Cnt
Com_Cnt = RoundDown(Log(Str_Cnt / 2) / Log(2))
If Com_Cnt < 4 then Com_Cnt = 4
// The next step analyzes Com_Cnt to see if it is small enough, and reduces the length if it is not //
{ While // a repeating sequence that repeats until the if-then is true //
Load Decimal + 1 of Com_Cnt bits from Key_Run, value is Com_Pare
If Com_Pare > RoundDown(Str_Cnt / 2) then Com_Cnt = Com_Cnt - 1 else End }
Remove Com_Cnt bits from Key_Run
// The purpose of the code above is to get the decimal of the first portion of our string-length bits, and a decimal amount for our combinations count, which will be half that of the string length or less. //
Chk_Salt = Remove 1 bit from Key_Run
If Chk_Salt = 1 then remove 3 bits from Key_Run; these three bits become Salt_True
Using Com_Pare, identify the Replacement_File.txt table section for the ternary replacement. Remove the identified bits from Process_Run as identified by the table inside Replacement_File.txt in match to the corresponding binary. Call the result Out_Put1
// This uses the table to identify a length section appropriate for the replacement, then identifies the string section inside it that matches our source, which indicates what to replace it with //
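
The hash-derived order selection above can be sketched in Python. This is my own simplified interpretation, not the actual code: the `hash_one` helper is a stand-in that folds a SHA-256 digest into the 1-6 range roughly following the "last digit" rule.

```python
import hashlib

# Map the six possible Hash_One values to permutations of the three
# first-cycle processes (a = Combin_First, c = Juggle, d = Shuffle).
ORDER_ONE = {1: "acd", 2: "adc", 3: "cda", 4: "cad", 5: "dac", 6: "dca"}

def hash_one(key: bytes) -> int:
    """Derive a 1-6 value from the key's hash (simplified stand-in)."""
    digest = int.from_bytes(hashlib.sha256(key).digest(), "big")
    digit = digest % 10
    if digit in (7, 8, 9, 0):        # "divide by ten and drop the decimal"
        digit = (digest // 10) % 10
    return max(1, min(digit, 6))     # clamp into the 1-6 range

key = b"example key"
print(ORDER_ONE[hash_one(key)])      # one of the six orderings
```

The same pattern would repeat for Order_Two and Order_Three with their own hash values and four-process permutations.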
{ // Process_2 is very similar to Process_1; the main difference is that it is already running in ternary. //
Load Process_Run
Load Key_Card
Load first 5 bits of Hash_Key, find the Decimal + 1 and save as Hash_Ke2

Str_Len = 0
Key_Run = Key_Card
Salt_True = 0
Com_Pare = 0
Com_Cnt = 0
Str_Cnt = 0
Chk_Salt = 0
// Replacement_File.txt //

{ If Hash_Ke2 > 0 & < 4 then Str_Len = 4
If Hash_Ke2 > 3 & < 9 then Str_Len = 5
If Hash_Ke2 > 8 & < 14 then Str_Len = 6
If Hash_Ke2 > 13 & < 19 then Str_Len = 7
If Hash_Ke2 > 18 & < 24 then Str_Len = 8
If Hash_Ke2 > 23 & < 29 then Str_Len = 9
If Hash_Ke2 > 28 & < 33 then Str_Len = 10
// Longer possible string lengths in follow-up repetitions increase the difficulty of statistical analysis and brute forcing significantly. // }

{ If Key_Run does not have sufficient length then Key_Run = Key_Run + Key_Card
Remove Str_Len bits from Key_Run and identify the Decimal + 1 value of these bits. This will be called Str_Cnt
Com_Cnt = RoundDown(Log(Str_Cnt / 2) / Log(2))
If Com_Cnt < 4 then Com_Cnt = 4
{ While // repeats until the if-then is true //
Load Decimal + 1 of Com_Cnt bits from Key_Run, value is Com_Pare
If Com_Pare > RoundDown(Str_Cnt / 2) then Com_Cnt = Com_Cnt - 1 else End }
Remove Com_Cnt bits from Key_Run
Chk_Salt = Remove 1 bit from Key_Run
If Chk_Salt = 1 then remove 3 bits from Key_Run; these three bits become Salt_True
Using Com_Pare, identify the Replacement_File.txt table section for the ternary replacement. Remove the identified bits from Process_Run as identified by the table inside Replacement_File.txt in match to the corresponding binary. Call the result Out_Put1
If Chk_Salt = 1 then ************* SALTS NEED TO GO HERE ***********
// Some salts occur before the next process, some after; I am going to make a separate post about the salts //
Fill empty spots in Out_Put1 by using the appropriate length of Process_Run }

{ // Process_3 //
// The Juggle routine increases the net cost of brute-force attempts to total processor time * 2^n, where n is the number of bits in the entire file to be encrypted. This is per cycle involved, and only if they get the order of processes correct. //
Hash_Mark = Hash of Key_Card
Len_Mark = Length of Hash_Mark divided by 2, rounded down
Hash_Mark = Hash_Mark - Len_Mark
Sort_Hash = Last 3 bits of Hash_Mark
Done_Hash = Decimal + 1 of Sort_Hash
Hash_Mark = Hash_Mark minus Sort_Hash
Trig_Cnt = Last three bits of Hash_Mark
Jug_Start = 0
Trig_Dec = Decimal + 1 of Trig_Cnt
Trig_1 = 0, Trig_2 = 0, Trig_3 = 0, Trig_4 = 0, Trig_5 = 0, Trig_6 = 0, Trig_7 = 0, Trig_8 = 0, Trig_? = 0 // see lower notes //
{ If Process_Run is Ternary then run sub_prss1, else run sub_prss2
If Done_Hash < 3 then Done_Hash = 3
// Examines to see if the system is in ternary; should be obvious // }

{ // sub_prss1 //
// Trig_Dec and Done_Hash are the main values that determine the length and number of triggers. //
Load Process_Run
Trig_? = ? // The above needs to grow incrementally for Trig_1 to Trig_8, or use some sort of array //
{ While Trig_Dec > 0 // trigger making //
Read the first three bits of the key; if the bits = 000 then Trig_? = 00, if 001 then 01, if 010 then 10, if 100 then 02, if 110 then 20, if 101 then 12, if 011 then 21, if 111 then 11
// Note this is the extremely simple version // }
{ While Process_Run still has bits, repeat sequence
Remove Done_Hash trits from Process_Run; these are First_Trig
Read First_Trig for the first match to the Trig_? values; when matched, move the remainder after the match to Sec_Trig
Read Sec_Trig in reverse to find a match to the Trig_? values; when matched, move the remainder after the match to Thrd_Trig
Read Thrd_Trig for the first match to the Trig_? values; when matched, move the remainder after the match to Frth_Trig
Read Frth_Trig in reverse to find a match to the Trig_? values; when matched, move the remainder after the match to Fith_Trig
Read Fith_Trig for the first match to the Trig_? values; when matched, move the remainder after the match to Sxth_Trig
Read Sxth_Trig in reverse to find a match to the Trig_? values; when matched, move the remainder after the match to Svth_Trig
Read Svth_Trig for the first match to the Trig_? values; when matched, move the remainder after the match to Egth_Trig
// Whatever remains goes only into the 8th set in this version //
End While when Process_Run is empty }
{ Process_Run = Reverse order of data for Sec_Trig, Frth_Trig, Sxth_Trig, Egth_Trig } }
{ // sub_prss2 //
Key_Fun = Key_Card
Trig_? = ?
Load Process_Run
{ While Trig_Dec > 0
Remove three bits from Key_Fun; they become Trig_? // incremental increase function //
// The design may use all strings as keys if the odds work out in binary; this is an extremely simple version // }
{ While Process_Run still has bits, repeat sequence
Remove Done_Hash bits from Process_Run; these are First_Trig
Read First_Trig for the first match to the Trig_? values; when matched, move the remainder after the match to Sec_Trig
Read Sec_Trig in reverse to find a match to the Trig_? values; when matched, move the remainder after the match to Thrd_Trig
Read Thrd_Trig for the first match to the Trig_? values; when matched, move the remainder after the match to Frth_Trig
Read Frth_Trig in reverse to find a match to the Trig_? values; when matched, move the remainder after the match to Fith_Trig
Read Fith_Trig for the first match to the Trig_? values; when matched, move the remainder after the match to Sxth_Trig
Read Sxth_Trig in reverse to find a match to the Trig_? values; when matched, move the remainder after the match to Svth_Trig
Read Svth_Trig for the first match to the Trig_? values; when matched, move the remainder after the match to Egth_Trig
// Whatever remains goes only into the 8th set in this version //
End While when Process_Run is empty }
{ Process_Run = Reverse order of data for Sec_Trig, Frth_Trig, Sxth_Trig, Egth_Trig } }

{ // Process_4 - Shuffle Process //
// Shuffle is designed simply to swap key-derived-length blocks of trits or bits //
Read Key_Card's first three bits; this becomes Shfl_Len
Shfl_Dec = Decimal + 1 of Shfl_Len
If Shfl_Dec < 2 then Shfl_Dec = 2
// Above is how we decide what length of blocks is being swapped. //
Totl_Left = Shfl_Dec
Bin_Bin = 0
Key_Shfl = Key_Card
{ If Totl_Left > 2 then Bin_Bin = Log(Totl_Left) / Log(2) // as if using the log function in MS Word //
Key_Req = Remove Bin_Bin bits from Key_Shfl
Shfl_? = Decimal of Key_Req
// I had a problem defining this step. I will make a separate post showing how this would look, but not how it would work in pseudocode // } }
// Decryption to come when I have the time, hopefully it is obvious to some //
___________________________________________SALTS LIST____________________________
During the combinatorics phase, additional methods known as salts can be applied to the source to confound attempts to break the encryption. These salts can be modified to use binary or ternary as needed.

The salts are:
Salt 1: add combination(s) at
This salt will be triggered by a 000 in the key. The next two to four bits of the key determine where in the current set the fake combination will be placed, while the length of the combination string determines how many bits are required.

Salt 2: ended combination, start new combination early
This salt triggers automatically when possible; it will not allow a previous combination to break the size rules (a minimum string length of 4 and a maximum of 50% combinations inside the string length). This increases security by preventing detection of the salt. This salt will not be used if there is a marker for another salt. (This one is disabled in the example, as I am only human.)

Salt 3: Simulate multiple smaller combinations
This salt will be triggered by a 001 in the key. If the combination is under 8, it defaults to NO SALT; otherwise it defaults to two distinct combination strings, where you divide the string size by 2, rounding down for the first and taking the remainder for the second. A possible alternative is using a marker in the key to allow more divisions, provided the string length is long enough.

Salt 4: Skip Combination entirely
This salt will be triggered by a 010 in the key. The size will be determined by the previous combination string length: if under 8, it will be a 10 string length; if over 8, it will be a 6 string length. It is also possible to vary the size with a math formula, or a hash value. Similar to Salt 8, except we still use the full length of the listed string.

Salt 5: Skip real combinations, insert fake combination
This salt will be triggered by a 100 in the key. The size will be determined by the previous combination string length: if under 8, it will be a 10 string length; if over 8, it will be a 6 string length.

Salt 6: Can use 2 dimensions
This salt will be triggered by a 011 in the key, and results in a combination going down instead of left to right. This is a complexity issue; I have plans for up/down, but is the encryption community ready for this complexity? This salt can be skipped if the complexity is too much. It is unlikely that making blocks for this function would let any existing block attacks find vulnerabilities to exploit.

Salt 7: This will invert the binary values in the next combination
This salt will be triggered by a 101 in the key.

Salt 8: In between fixed-length combinations - where the leading combination string ends with a combination location and the next string starts with one - you can put a completely blank string. This is contrary to Salt 4, where we use the assigned string length in full instead of a variable. This salt will be triggered by a 110 in the key.

Salt 9: If we are using ternary, this salt alters which of the 0, 1, 2 values is being used to encode the combinations and which is carrying the binary. This can be a permanent flip or a temporary flip, as desired or as built into the function. This salt will be triggered by a 111 in the key.
________________________________________________Example Tables (paste from Excel)_______________________________
submitted by PHDEinstein007 to encryption [link] [comments]

[Ciphers for beginners] Chapter 2: Binary-to-text encodings

In the last chapter, we talked about how characters can be encoded using different numeral systems. We'll stay on the topic of encodings a bit longer to talk about binary-to-text encodings. These algorithms are used by computers to exchange any sort of data using only a restricted set of characters. Nowadays, it is not rare to see them used in ARGs to encode messages.
Before we dive further into this topic, note that those encodings can be used to represent any data, not only text. You could use a binary-to-text encoding to represent the data of an image or a soundtrack, for example.


Base64

The most famous of those encodings is called Base64. If you read the previous chapter, then you might already have guessed where this is going. Base64 can be considered as a numeral system that uses 64 characters: A to Z, a to z, 0 to 9, and the characters + and /. Each of those is associated with an index, using the index table below:
The Base64 index table
To encode using Base64, each character from the plaintext must first be converted to its corresponding ASCII binary code. Then, each group of 6 bits is converted to a Base64 character by using the index table.
Using Base64, the message "ARG" would be encoded as "QVJH".
How Base64 encoding works
Because bytes contain 8 bits each, 3 bytes (3 x 8 = 24 bits) can be encoded as 4 Base64 characters (4 x 6 = 24 bits). For this reason, most Base64 messages have a length that is a multiple of 4. A 65th character, =, is usually used at the end to pad the length of the message if need be.
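
The whole pipeline can be checked in a couple of lines of Python using the standard library:

```python
import base64

message = b"ARG"
encoded = base64.b64encode(message)
print(encoded.decode())  # QVJH

# 3 bytes (24 bits) become exactly 4 characters, so no padding is needed;
# a 4-byte input is padded to 8 characters with '='.
print(base64.b64encode(b"ARGs").decode())  # QVJHcw==
```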
There also exists a number of variants of Base64, using different characters in place of +, / and =. But don't worry too much about this: ARGs that use Base64 usually choose the most common version of it.


Base32

Other similar encodings exist, such as Base32, which uses only 32 characters: A to Z and 2 to 7, again with an optional = for padding.
The Base32 index table
It works the same way as Base64 but the bits are grouped by 5 this time. 5 bytes of data (5 x 8 = 40 bits) are encoded using 8 Base32 characters (8 x 5 = 40 bits). If the length of the result is not a multiple of 8 already, it is usually padded using the extra character =.
Using Base32, the message "ARG" would be encoded as "IFJEO===".
How Base32 encoding works
Again, just like for Base64, note that there exists different variants of Base32, using different sets of characters.
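
Python's standard library handles Base32 too, including the padding:

```python
import base64

print(base64.b32encode(b"ARG").decode())     # IFJEO===
# Decoding is just as easy (the padding must be present):
print(base64.b32decode(b"IFJEO===").decode())  # ARG
```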

Base58, Base91

Still in the same vein, you might on rare occasions encounter less-used binary-to-text encodings.
For example, Base58 uses the same characters as Base64 but omits 0, O, I, l, + and /.
Base91 uses almost all printable ASCII characters except the space, the dash, the backslash and the apostrophe.
They work a bit differently from Base64, but considering they are rarely used, there's probably no need to dive into their precise behavior. Just try to remember they exist, so that you're ready once you encounter them!


Base36

Base36 is yet another binary-to-text encoding, this time using the characters A to Z and 0 to 9. It can in fact really be seen as a normal numeral system, where you count from 0 to 9, then go on with A (10 in decimal) up to Z (35 in decimal). To write 36 in Base36, you would need to add another digit, leading to 10. And so on...
An interesting fact about this encoding is that it can be used to hide an alphanumeric message by actually decoding from Base36. Indeed, any alphanumeric message (with no space) is a valid Base36 number! For example, ARG in Base36 is 13948 in decimal.
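
Since it really is just a numeral system, Python can decode Base36 natively; encoding needs only a small helper:

```python
# Python's int() accepts bases up to 36, so decoding is built in:
print(int("ARG", 36))  # 13948

# Encoding back to Base36 needs a few lines:
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base36(n: int) -> str:
    out = ""
    while n:
        n, r = divmod(n, 36)
        out = DIGITS[r] + out
    return out or "0"

print(to_base36(13948))  # ARG
```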
Base36 to decimal conversion


Ascii85

Finally, let's also mention the encoding called Ascii85, sometimes referred to as Base85. The original version of it uses the characters 0 to 9, A to Z, a to u, the punctuation !"#$%&'()*+,-./:;<=>?@[\]^_, the backtick `, and sometimes y and z.
Another version, created by Adobe, uses the same alphabet but added <~ and ~> delimiters around the code.
And last but not least, there exists a version called Z85, which uses the characters 0 to 9, a to z, A to Z, and .-:+=^!/*?&<>()[]{}@%$#.
The algorithm for Ascii85 is a bit more complex than the previous ones but explaining it is probably not needed. Don’t worry though, you can easily find online tools to do the decoding for you!
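
If you do want to experiment rather than rely on online tools, Python's standard library covers Ascii85 as well, including the Adobe delimiters:

```python
import base64

msg = b"ARG"
encoded = base64.a85encode(msg)
print(encoded)
# Round-trip check:
assert base64.a85decode(encoded) == msg

# The Adobe variant wraps the same output in <~ ~> delimiters:
adobe = base64.a85encode(msg, adobe=True)
print(adobe)  # starts with b'<~' and ends with b'~>'
```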

Quickly identify binary-to-text encoding

Phew, what a mess, right? Let's sum this up real quick.
When you encounter a coded message that contains a bunch of characters without space, ask yourself the following questions.
Does it have one or more = at the end? This is characteristic of Base32 and Base64.
Does it contain both uppercase and lowercase letters? Try Base64 or, less likely, Base58.
Does it contain only uppercase letters and digits but you can't find any 0, 1, 8, or 9? You should probably try decoding it with Base32.
Does it contain weird symbol characters? Looks like Base91 or a variant of Ascii85.
Does it start with <~ and/or end with ~>? This is most likely the Adobe variant of Ascii85.
Those questions should help you narrow down the numerous decoding options at your disposal.
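
Those rules are easy to turn into a quick-and-dirty classifier. This is only a heuristic sketch of the checklist above, not a guarantee, since the character sets overlap:

```python
def guess_encoding(s: str) -> str:
    """Rough guess at a binary-to-text encoding from its character set."""
    if s.startswith("<~") or s.endswith("~>"):
        return "Ascii85 (Adobe variant)"
    if s.endswith("="):
        # Both Base32 and Base64 pad with '='; lowercase letters settle it.
        return "Base64" if any(c.islower() for c in s) else "Base32"
    if any(not c.isalnum() for c in s):
        return "Base91 or Ascii85"
    if any(c.islower() for c in s):
        return "Base64 or Base58"
    if not any(c in "0189" for c in s):
        return "Base32"
    return "unknown (try Base36 or hex)"

print(guess_encoding("QVJHcw=="))   # Base64
print(guess_encoding("IFJEO==="))   # Base32
```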


You can find online tools to decode all the binary-to-text encodings listed in this chapter. Here is a non-exhaustive list of them:
As you might have guessed, any base could actually be used to encode binary data, given the proper index table or algorithm. This chapter focused on the most popular ones that you might encounter in ARGs.
See you in the next chapter!
submitted by Golmote to OrbisObscura [link] [comments]

More info but nothing new solved

So I'm gonna dump some info I have gathered one more time. Nothing too cool. I hope you find it useful and maybe it helps decrypt some of the messages! For now, it's all been failure on my end!

Decrypted posts:
F04_nod.redd -> Even though it's decrypted, I'm not sure. It may be a key for something

Some nice resources:
> Cryptography
> ARGs/internet mysteries/creepypastas

> Steganography

- Analysis of a plain english text encoded in base32 against the long message:
>>>>>>>>>>>>>>>>>>PLAIN ENGLISH ENCODED LEN -> 653 IC -> 0.0369 ###################################### D -> 38 (12.14) 3 -> 38 (12.14) H -> 36 (11.50) X -> 32 (10.22) R -> 30 (9.58) P -> 29 (9.27) 8 -> 29 (9.27) M -> 27 (8.63) F -> 27 (8.63) 9 -> 27 (8.63) ###################################### W7 -> 15 (16.48) D3 -> 11 (12.09) PM -> 11 (12.09) 3R -> 11 (12.09) BX -> 8 (8.79) C8 -> 7 (7.69) X3 -> 7 (7.69) RA -> 7 (7.69) HK -> 7 (7.69) E3 -> 7 (7.69) ###################################### 3RA -> 6 (13.33) X3R -> 5 (11.11) 3RK -> 5 (11.11) 9BX -> 5 (11.11) KM3 -> 4 (8.89) 3DE -> 4 (8.89) W7Z -> 4 (8.89) 8W7 -> 4 (8.89) 7ZH -> 4 (8.89) XE3 -> 4 (8.89) <<<<<<<<<<<<<<<<< - The first message has only 32 different characters (23456789ABCDEFGHJKLMNPQRSTUVWXYZ) in a message that is 695 chars long which suggest some sort of Base32 encoding
- The second message has 13 words of 13 letters with a charset of 36 (if we count the space) different characters.
Some of the characters here are not present in the first message (012345689ABCDEFGHIJKLMNOPQRSTUVWXYZ)
- Could this be a matrix for a Hill Cipher? ->
- "To help one is to help all" -> may come from the law of one by ra
+ How to attack this:
- One way the first message could be encrypted is by using a custom base32 alphabet
  1. Set a randomly sorted base32 alphabet
  2. Decrypt the encrypted message using it
  3. Check the fitness of the result
  4. Modify the alphabet
Seems straightforward. You can't check all alphabet permutations because 32! = 263130836933693530167218012160000000
What do then?
- Define transformations of the alphabet like swapping elements, sliding pieces of the alphabet, shuffle chunks,...
- Swap failing characters in the alphabet (those that decrypt to non-printable characters)
- Define a fitness function that depends on the english frequencies of bigrams, trigrams or quadgrams. Or maybe one based on the printability of the output
- If we assume that most of the characters are in the range A-Za-z then we can set a rule:
Let us analyze the first character of the string: V
V can be any number from 0 to 31... or can it? See, if we assume that the first character is a letter (which may not be the case if the original text is shuffled before the encryption), and also a capital letter (maybe it's the "T" from "To help ..."), then we have a couple of restrictions on what V can be. Ascii uppercase letters have values ranging from 0x41 (01000001) to 0x5a (01011010) so V's value must be 01---.
The catch here is the space character (' ' 0x20 00100000) which doesn't start with 01 and can be frequent in the text. Other punctuation symbols have similar issues.
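
A minimal hill-climbing skeleton for the custom-alphabet idea might look like this. The fitness function here just scores printable ASCII, which is the crudest of the options listed above, and the only mutation used is a character swap; a real attempt would add the other transformations and an n-gram fitness:

```python
import base64
import random

STANDARD = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"

def decode_with_alphabet(msg: str, alphabet: str) -> bytes:
    # Translate the custom alphabet back to the standard one, then decode.
    # Assumes msg (with any '=' padding) has a length b32decode accepts.
    table = str.maketrans(alphabet, STANDARD)
    return base64.b32decode(msg.translate(table))

def fitness(data: bytes) -> float:
    # Crude score: fraction of printable ASCII bytes.
    return sum(32 <= b < 127 for b in data) / len(data)

def hill_climb(msg: str, steps: int = 10000) -> str:
    best = list(STANDARD)
    best_score = fitness(decode_with_alphabet(msg, "".join(best)))
    for _ in range(steps):
        cand = best[:]
        i, j = random.sample(range(32), 2)   # mutate: swap two characters
        cand[i], cand[j] = cand[j], cand[i]
        score = fitness(decode_with_alphabet(msg, "".join(cand)))
        if score >= best_score:
            best, best_score = cand, score
    return "".join(best)
```

With a real ciphertext you would run this with restarts and a better fitness function, since a printability score alone has many local maxima.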

- My best guess here is that this seemingly random chunk of html is to be hashed and a key generated from that, or used as-is as some sort of encryption key.
- Source of the chunk ->

- The easy solution that seems too easy to be a solution -> THEJUNGLEBOOK
- Notice it's 13 characters long (can this be used with other 13-char long strings that are present throughout the subreddit?)
- Tried to use this as an OTP key for the water_swift with no luck using either the numbers or the letters (I think I did this, but give it a try just in case I didn't do it or did it wrong)

- A mime with a broom. It might not be an original image but, if it isn't, I haven't been able to locate the original.
- Outguess returns that no bits are available when attempting to decrypt. This seems weird, but I don't know if it means something
- Most of the pictures seem to be related with other ARGs/internet mysteries/creepypastas
- On the side of the right shack, we can see on the roof a strange drawing and XY
+ How to attack this:
- It might have some hidden info, so you can check steganography software like outguess and attempt to recover the hidden information by bruteforcing the key with a list of words. Since we don't know whether there is information hidden (or even if reddit compresses the images when uploading them, which would kill any chance of hiding stuff in them), this might lead to nothing.

- Charset (57): 03456789BCDEFGHIJKLMNOPQRSTUVWXZabcdefghijklmnopqrsuvwxyz
- 13 characters per string, 13 strings
- This could be a table of keys.
+ How to attack?
- No idea. So start with the basics:
- Frequency analysis:
 ('0', 9), ('r', 8), ('b', 7), ('q', 6), ('m', 5), ('H', 5), ('J', 5), ('9', 5), ('8', 4), ('g', 4), ('a', 4), ('N', 4), ('x', 4), ('F', 4), ('V', 4), ('h', 3), ('j', 3), ('E', 3), ('S', 3), ('e', 3), ('O', 3), ('v', 3), ('C', 3), ('f', 3), ('z', 3), ('7', 3), ('n', 3), ('o', 3), ('X', 3), ('W', 3), ('d', 2), ('K', 2), ('Z', 2), ('k', 2), ('B', 2), ('G', 2), ('s', 2), ('y', 2), ('Q', 2), ('c', 2), ('T', 2), ('5', 2), ('p', 2), ('i', 2), ('R', 2), ('l', 2), ('3', 2), ('I', 2), ('L', 2), ('P', 1), ('D', 1), ('U', 1), ('w', 1), ('4', 1), ('6', 1), ('u', 1), ('M', 1) 
- A similar analysis as with the Paige12 post can be done.
- The fact that there are only 31 different characters may just be because the message is not very long, or it may be that the alphabet really has only 31 characters.
- It's likely that the top message uses the same alphabet
- In the top message not all words are the same length (13 - 6 - 12 - 6 - 13)

- Another chunk of html.
- A similar one ->
- Might be an old implementation of some sort of MM Chat ->
- 859 chars long

- 13 hexadecimal characters
- This could be a One Time Pad (OTP). In this case what you do is you take a string that is also 13 chars long and xor each character with a 13 char long key. To decrypt, xor the encrypted result with the key.
+ How to attack:
- Take all 13 char long strings and xor them against this.
- The problem with OTP is that unless you know the key, you can make the text say anything you want by decrypting it with the right key:
>> Let's assume I want this to be THEJUNGLEBOOK. What I need to do is xor each character of the desired plaintext with the encrypted message. That gives me c1 db ea 0c c3 71 22 83 a2 7d 61 79 4d. I can now use this to xor the encrypted message and get THEJUNGLEBOOK as plaintext
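The malleability argument above can be demonstrated in a few lines. The ciphertext bytes below are made up (the real 13 hex characters are in the post); the point is only that for any target plaintext there exists a key that "decrypts" the ciphertext to it.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two byte strings position by position."""
    return bytes(x ^ y for x, y in zip(a, b))

cipher = bytes.fromhex("950f84465692652e60fd2e3b1e")  # hypothetical 13-byte ciphertext
target = b"THEJUNGLEBOOK"

# A key chosen as cipher XOR target will always "decrypt" cipher to target,
# which is why an OTP ciphertext alone proves nothing about its plaintext.
forged_key = xor_bytes(cipher, target)
print(xor_bytes(cipher, forged_key))  # b'THEJUNGLEBOOK'
```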

- A link to a section of oocities and a series of numbers and words. Will do a bot to check them sites!
Found omega in Found 1313 in Found allag in Found omega in Found omega in Found 1313 in Found omega in Found 1313 in Found 1313 in 
- This was found crawling only the main index. The bot may need to go deeper underground!
- The words are 13 13 omega allag weinstein challa g57
- g57 may be another medical code ->

- 20 8-letter strings
- It may use the same algorithm as the first post

- Another chunk of html (1507 chars)
- Seems to come from facebook somehow. Similar code ->
- A c program which outputs something like this:
1313
>> 20
1
2 3
4 5 6
7 8 9 10
11 12 13 14 15
16 17 18 19 20 21
22 23 24 25 26 27 28
29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45
46 47 48 49 50 51 52 53 54 55
56 57 58 59 60 61 62 63 64 65 66
67 68 69 70 71 72 73 74 75 76 77 78
79 80 81 82 83 84 85 86 87 88 89 90 91
92 93 94 95 96 97 98 99 100 101 102 103 104 105
106 107 108 109 110 111 112 113 114 115 116 117 118 119 120
121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136
137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153
154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171
172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190
191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210
- >> indicates the input I gave it
- This could be used to sort some of the characters in the encrypted messages.
- The encryption could be just writing the characters in order and then taking them out in columns. For example:
- We want to encrypt ATTACKATDAWN
- We lay the text according to this pyramid
- Extract the text by columns -> ATAAWTCTWKDA
- To decrypt, just lay the text in columns and enjoy. Combine this with some sort of substitution to reduce your sanity levels!
- A python implementation:
    def main():
        print('1313')
        # Will break if a string is input
        n = int(input())
        # Since python's range goes [1,n) we need to add one to the top limit
        a = 1
        for i in range(1, n + 1):
            for c in range(1, i + 1):
                print('%d ' % a, end='')
                a += 1
            print()
        return 0
- Here starts the shit. The encrypted text (if it is an encrypted message), not only plays with the different characters, but it also adds formatting to the equation. Is this important? Can I ignore italics and other such artifacts? Probably not. Probably not...

- This looks like substitution + transposition. Are the spaces moved around too? If not, how many combinations that make sense of a one-letter - two-letter pair are there in English?
- Give it a try with the program of the stockwood post
- Check them candidate algorithms:
> Vigenere
> Autokey
> Beaufort
> Running key
> Hill cipher
> ADFGVX cipher
> Playfair cipher
> Moar ->

- This one seems to be a variation of the f04cb algorithm.
- If we arrange the characters into columns every 3 bytes we get:
 3d 41 74 3c 42 73 38 43 73 3a 43 71 36 40 72 36 45 76 3c 40 76 3d 47 78 35 a9 75 38 41 71 39 a3 74 35 41 71 39 a5 78 35 a9 77 35 42 77 39 a5 72 3d 42 76 3a 44 5a 39 a6 73 36 40 76 3a 48 74 36 40 74 37 
- All characters in the left column start with 3 (0011)
- Almost all characters in the middle column start with 4 (0100)
- Almost all characters in the right column start with 7 (0111)
- The fact that it's almost all points to an operation done to the characters (like a xor), and not to half-bytes just being inserted between half-bytes
- It is impossible to represent more than 16 characters with half a byte. This means that, in case the xor operation was the last one done, you can't know in advance where the positions of the 0s and 1s in the first half of the byte are going to be. You could have some sort of one time pad so that it produces this result, but this option seems unlikely
- Those characters that don't follow the pattern could correspond with special characters (like \r, \n, \t,...)
- Since 3 seems to have some sort of significance, it could be that the operations are done to groups of 3 bits (or 6, 13, pick a number!)
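The column observations above can be checked mechanically. This reads the hex dump from the column layout and prints the set of high nibbles in each of the three columns.

```python
# The bytes from the 3-byte column arrangement above.
hexdump = ("3d 41 74 3c 42 73 38 43 73 3a 43 71 36 40 72 36 45 76 "
           "3c 40 76 3d 47 78 35 a9 75 38 41 71 39 a3 74 35 41 71 "
           "39 a5 78 35 a9 77 35 42 77 39 a5 72 3d 42 76 3a 44 5a "
           "39 a6 73 36 40 76 3a 48 74 36 40 74 37")
data = bytes.fromhex(hexdump.replace(" ", ""))

for col in range(3):
    # Every third byte belongs to the same column; b >> 4 is its high nibble.
    nibbles = sorted({b >> 4 for b in data[col::3]})
    print(col, [hex(n) for n in nibbles])
# Column 0 is all 3x; columns 1 and 2 each have a few outliers
# (the a9/a3/a5/a6 and 5a bytes mentioned above).
```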

welcome to our new home
- Just a text. 32 characters long if we take the spaces into account

- Not your typical Lorem Ipsum. It starts with the standard "Lorem ipsum" but then changes.
- All the words in the post seem to be in the original text. Maybe this is a clue for the code.
Maybe the text can be translated into numbers according to the place of the word in the original text (first occurrence). Then, maybe those numbers can be used for something. Maybe.
- The tree in the title may be a clue to use the stockwood program

- As with all images, check with outguess or similar tools
- Original image ->

- The text seems to be some dummy text
- You can check this by searching for example "Gi tractare ut ex concilia" in google
- Where this text comes from is unknown
- A longer version ->
- Another reference to oocities
- The text is from Kafka's Metamorphosis ->
- It also seems to be used as dummy text for html templates

- Crossword picture. Could this be used as a template for letters in some of the previous text?
- Following are the clues with the length of each word in the crosswords
3 The OG (6)
5 Lake (4)
10 Brookfeld (3)
12 The first coming (14)
14 The Great SF64 (4)
15 Moose (8)
16 Prefix and begin (3)
17 Justic (4)
1 Ema (5)
2 VolumeX (9)
4 Fasttrack (3)
6 Meet You There (5)
7 Frame (10)
8 The Lost (7 or 6)
9 Pursuit (7 or 6)
11 Microphone (4)
12 IceRen (5)
13 Jacket (4)
- Please check that the words have been transcribed correctly

- Looks like one of those transposition + substitution ciphers I heard so much about...
- That EAW, does it have something to do with the Ema of the creations_puzzle.jpg?
+ How to attack
- Get frequencies of letters and bigrams
- Get the Index of Coincidence
- Depending on what comes out cry or attempt something different
- First, try key sizes of 6 and 13 as they seem to be important numbers
- Try to guess some of the words that can be there. The longer the better!
- When that fails, crouch into the fetal position and keep crying.
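For the Index of Coincidence step, a small helper (this is the standard definition; the thresholds in the docstring are the usual rules of thumb):

```python
from collections import Counter

def index_of_coincidence(text: str) -> float:
    """IC = sum n_i*(n_i - 1) / (N*(N - 1)) over letter counts n_i.
    English plaintext sits near 0.066; uniformly random letters near 0.038."""
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    if n < 2:
        return 0.0
    counts = Counter(letters)
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

print(index_of_coincidence("DEFEND THE EAST WALL OF THE CASTLE"))
```

An IC close to English on a ciphertext is what supports the transposition-only hypothesis below: transposition permutes letters without changing their frequencies.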
- A program?
    // # # # # # # #   y=0
    // # # . . . # #   y=1
    // # # . . . . #   y=2
    // # . . . . . #   y=3
    // # # . . . . #   y=4
    // # # . . . # #   y=5
    // # # # # # # #   y=6
    x2 = x - 1 + ((y + FirstShift) % 2);
    x3 = x + ((y + FirstShift) % 2);
- If % represents modulo, then the second term can only be 0 or 1
- x3 = x2 + 1
- There is also a list of numbers and a sentence
- Is the reference a shady reference to cicada?
- Is it a reference to one of the spinoffs of cicada?

- Charset (24): _'ABCDEFGHIKLMNOPRSTUVWY (the _ represents the space)
- It's 143 characters long. 143 = 13*11
- The IC matches that of english so it could be that only transposition has been used
- Individual frequencies also match those of english (more or less, but good enough for such a small text)
- At least it's a double transposition (maybe more)
- There are 24 spaces, which suggests that the sentence is 25 words long
- The presence of ' indicates that there is an n't or 's (are there more possibilities?)
- We don't know the key size.
- Key lengths -> first I'll try 13 13, 6 13, 13 6, 4 20, 20 4 and see what happens
- Maybe the key is in one of the previous messages with long strings or even the jungle book one
+ How to attack?
- Assuming this is a double transposition, follow this -> chapter 5.3.1 (page 68)
- If it's more than double transposition, I think the same attack vector still holds
- Check also William Friedman's literature on the subject

- Here is the reference to 1976. It could be a reference to the paper by Diffie and Hellman
- All strings are 8 chars long except for the first and the last (6)

- No idea. Has a comment that looks like a perl script. Haven't tried to run it (it would need a couple of files)

I'm going to add a small explanation on baseN numbers
What is baseN?
Base32, base64, base10,... just refer to the number of different characters you use to represent a number.
For example
    base10  base2  base16
    12      1100   0xc

This can be interpreted as:
    12   -> 1 * 10^1 + 2 * 10^0
    1100 -> 1 * 2^3 + 1 * 2^2 + 0 * 2^1 + 0 * 2^0
    0xc  -> c * 16^0
In the case of base 16 we need more characters than just the numbers from 0-9 so we use a-f for the 10-15 range. In this example c = 12.
This is the basic idea. However, when you check the standard base64 implementations you can see that the encoded string sometimes has padding characters (=). This is a consequence of how bytes are encoded into base64 to optimize the performance of the algorithm. In the standard case, each character of the base64 alphabet represents a 6-bit value, from 0 (binary 000000) to 63 (binary 111111).
So when you encode a string like MESSAGE to base64 first you transform it into bits:
    M        E        S        S        A        G        E
    01001101 01000101 01010011 01010011 01000001 01000111 01000101
Then, group them into 6-bit numbers:
 >> 010011 010100 010101 010011 010100 110100 000101 000111 010001 01 
We need to add 4 zeros to complete the last 6-bit group. To indicate this, we will add 2 '=' chars to the end of the string
 >> 010011 010100 010101 010011 010100 110100 000101 000111 010001 010000 
If we encode this according to the base64 value table we get:
    010011 010100 010101 010011 010100 110100 000101 000111 010001 010000
    T      U      V      T      U      0      F      H      R      Q      ==
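You can confirm the worked example with the standard library:

```python
import base64

# Encodes the 7 bytes of "MESSAGE" into 10 base64 characters plus 2 padding chars.
print(base64.b64encode(b"MESSAGE").decode())  # TUVTU0FHRQ==
```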


3.6 Release of Python (Part 1)


This is a long doc covering every change in Python 3.6. If you do not know anything about the commands, do not read this!

Manual and other things of Python 3.6

Python 3.6

What’s New In Python 3.6

Editors: Elvis Pranskevichus [email protected], Yury Selivanov [email protected] This article explains the new features in Python 3.6, compared to 3.5. Python 3.6 was released on December 23, 2016.

Summary – Release highlights

New syntax features:
New library modules:
CPython implementation improvements:
Significant improvements in the standard library:
Security improvements:
*On Linux, os.urandom() now blocks until the system urandom entropy pool is initialized to increase the security. See the PEP 524 for the rationale.
Windows improvements:

New Features

PEP 498: Formatted string literals

PEP 498 introduces a new kind of string literals: f-strings, or formatted string literals. Formatted string literals are prefixed with 'f' and are similar to the format strings accepted by str.format(). They contain replacement fields surrounded by curly braces. The replacement fields are expressions, which are evaluated at run time, and then formatted using the format() protocol:
name = "Fred">>> f"He said his name is {name}."'He said his name is Fred.'>>>
width = 10>>> precision = 4>>> value = decimal.Decimal("12.34567")>>> f"result:
{value:{width}.{precision}}" # nested fields'result: 12.35'

PEP 526: Syntax for variable annotations

PEP 484 introduced the standard for type annotations of function parameters, a.k.a. type hints. This PEP adds syntax to Python for annotating the types of variables including class variables and instance variables:
primes: List[int] = []
captain: str  # Note: no initial value!

class Starship:
    stats: Dict[str, int] = {}
Just as for function annotations, the Python interpreter does not attach any particular meaning to variable annotations and only stores them in the __annotations__ attribute of a class or module. In contrast to variable declarations in statically typed languages, the goal of annotation syntax is to provide an easy way to specify structured type metadata for third party tools and libraries via the abstract syntax tree and the __annotations__ attribute.
Tools that use or will use the new syntax: mypy, pytype, PyCharm, etc.

PEP 515: Underscores in Numeric Literals

PEP 515 adds the ability to use underscores in numeric literals for improved readability. For example:
Single underscores are allowed between digits and after any base specifier. Leading, trailing, or multiple underscores in a row are not allowed.
The string formatting language also now has support for the '_' option to signal the use of an underscore for a thousands separator for floating point presentation types and for integer presentation type 'd'. For integer presentation types 'b', 'o', 'x', and 'X', underscores will be inserted every 4 digits:
>>> '{:_}'.format(1000000)
'1_000_000'
>>> '{:_x}'.format(0xFFFFFFFF)
'ffff_ffff'

PEP 525: Asynchronous Generators

PEP 492 introduced support for native coroutines and async / await syntax to Python 3.5. A notable limitation of the Python 3.5 implementation is that it was not possible to use await and yield in the same function body. In Python 3.6 this restriction has been lifted, making it possible to define asynchronous generators:
async def ticker(delay, to):
    """Yield numbers from 0 to *to* every *delay* seconds."""
    for i in range(to):
        yield i
        await asyncio.sleep(delay)
The new syntax allows for faster and more concise code.

PEP 530: Asynchronous Comprehensions

PEP 530 adds support for using async for in list, set, dict comprehensions and generator expressions:
result = [i async for i in aiter() if i % 2]
Additionally, await expressions are supported in all kinds of comprehensions:
result = [await fun() for fun in funcs if await condition()]

PEP 487: Simpler customization of class creation

It is now possible to customize subclass creation without using a metaclass. The new __init_subclass__ classmethod will be called on the base class whenever a new subclass is created:
class PluginBase:
    subclasses = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.subclasses.append(cls)

class Plugin1(PluginBase):
    pass

class Plugin2(PluginBase):
    pass
In order to allow zero-argument super() calls to work correctly from __init_subclass__() implementations, custom metaclasses must ensure that the new __classcell__ namespace entry is propagated to type.__new__ (as described in Creating the class object).

PEP 487: Descriptor Protocol Enhancements

PEP 487 extends the descriptor protocol to include the new optional __set_name__() method. Whenever a new class is defined, the new method will be called on all descriptors included in the definition, providing them with a reference to the class being defined and the name given to the descriptor within the class namespace. In other words, instances of descriptors can now know the attribute name of the descriptor in the owner class:
class IntField:
    def __get__(self, instance, owner):
        return instance.__dict__[self.name]

    def __set__(self, instance, value):
        if not isinstance(value, int):
            raise ValueError(f'expecting integer in {self.name}')
        instance.__dict__[self.name] = value

    # this is the new initializer:
    def __set_name__(self, owner, name):
        self.name = name

class Model:
    int_field = IntField()

PEP 519: Adding a file system path protocol

File system paths have historically been represented as str or bytes objects. This has led people who write code which operates on file system paths to assume that such objects are only one of those two types (an int representing a file descriptor does not count as that is not a file path). Unfortunately that assumption prevents alternative object representations of file system paths like pathlib from working with pre-existing code, including Python’s standard library. To fix this situation, a new interface represented by os.PathLike has been defined. By implementing the __fspath__() method, an object signals that it represents a path. An object can then provide a low-level representation of a file system path as a str or bytes object. This means an object is considered path-like if it implements os.PathLike or is a str or bytes object which represents a file system path. Code can use os.fspath(), os.fsdecode(), or os.fsencode() to explicitly get a str and/or bytes representation of a path-like object. The built-in open() function has been updated to accept os.PathLike objects, as have all relevant functions in the os and os.path modules, and most other functions and classes in the standard library. The os.DirEntry class and relevant classes in pathlib have also been updated to implement os.PathLike. The hope is that updating the fundamental functions for operating on file system paths will lead third-party code to implicitly support all path-like objects without any code changes, or at least very minimal ones (e.g. calling os.fspath() at the beginning of code before operating on a path-like object). Here are some examples of how the new interface allows for pathlib.Path to be used more easily and transparently with pre-existing code:
>>> import pathlib
>>> with open(pathlib.Path("README")) as f:
...     contents = f.read()
...
>>> import os.path
>>> os.path.splitext(pathlib.Path("some_file.txt"))
('some_file', '.txt')
>>> os.path.join("/a/b", pathlib.Path("c"))
'/a/b/c'
>>> import os

PEP 495: Local Time Disambiguation

In most world locations, there have been and will be times when local clocks are moved back. In those times, intervals are introduced in which local clocks show the same time twice in the same day. In these situations, the information displayed on a local clock (or stored in a Python datetime instance) is insufficient to identify a particular moment in time. PEP 495 adds the new fold attribute to instances of datetime.datetime and datetime.time classes to differentiate between two moments in time for which local times are the same:
>>> u0 = datetime(2016, 11, 6, 4, tzinfo=timezone.utc)
>>> for i in range(4):
...     u = u0 + i*HOUR
...     t = u.astimezone(Eastern)
...     print(u.time(), 'UTC =', t.time(), t.tzname(), t.fold)
...
04:00:00 UTC = 00:00:00 EDT 0
05:00:00 UTC = 01:00:00 EDT 0
06:00:00 UTC = 01:00:00 EST 1
07:00:00 UTC = 02:00:00 EST 0
The values of the fold attribute have the value 0 for all instances except those that represent the second (chronologically) moment in time in an ambiguous case.

PEP 529: Change Windows filesystem encoding to UTF-8

Representing filesystem paths is best performed with str (Unicode) rather than bytes. However, there are some situations where using bytes is sufficient and correct. Prior to Python 3.6, data loss could result when using bytes paths on Windows. With this change, using bytes to represent paths is now supported on Windows, provided those bytes are encoded with the encoding returned by sys.getfilesystemencoding(), which now defaults to 'utf-8'. Applications that do not use str to represent paths should use os.fsencode() and os.fsdecode() to ensure their bytes are correctly encoded. To revert to the previous behaviour, set PYTHONLEGACYWINDOWSFSENCODING or call sys._enablelegacywindowsfsencoding().

PEP 528: Change Windows console encoding to UTF-8

The default console on Windows will now accept all Unicode characters and provide correctly read str objects to Python code. sys.stdin, sys.stdout and sys.stderr now default to utf-8 encoding. This change only applies when using an interactive console, and not when redirecting files or pipes. To revert to the previous behaviour for interactive console use, set PYTHONLEGACYWINDOWSSTDIO.

PEP 520: Preserving Class Attribute Definition Order

Attributes in a class definition body have a natural ordering: the same order in which the names appear in the source. This order is now preserved in the new class’s __dict__ attribute. Also, the effective default class execution namespace (returned from type.__prepare__()) is now an insertion-order-preserving mapping.
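A quick check of the preserved definition order (the class and attribute names here are illustrative):

```python
class Config:
    zeta = 1
    alpha = 2
    mid = 3

# Non-dunder names come out in definition order, not alphabetical order.
names = [n for n in Config.__dict__ if not n.startswith('__')]
print(names)  # ['zeta', 'alpha', 'mid']
```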

PEP 468: Preserving Keyword Argument Order

**kwargs in a function signature is now guaranteed to be an insertion-order-preserving mapping.
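For example (the function and keyword names are illustrative):

```python
def order(**kwargs):
    # kwargs is guaranteed to preserve the order of the keyword arguments
    # as they appeared in the call.
    return list(kwargs)

print(order(b=1, a=2, c=3))  # ['b', 'a', 'c'] -- call order, not sorted
```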

New dict implementation

The dict type now uses a “compact” representation based on a proposal by Raymond Hettinger which was first implemented by PyPy. The memory usage of the new dict() is between 20% and 25% smaller compared to Python 3.5. The order-preserving aspect of this new implementation is considered an implementation detail and should not be relied upon (this may change in the future, but it is desired to have this new dict implementation in the language for a few releases before changing the language spec to mandate order-preserving semantics for all current and future Python implementations; this also helps preserve backwards-compatibility with older versions of the language where random iteration order is still in effect, e.g. Python 3.5). (Contributed by INADA Naoki in bpo-27350. Idea originally suggested by Raymond Hettinger.)

PEP 523: Adding a frame evaluation API to CPython

While Python provides extensive support to customize how code executes, one place it has not done so is in the evaluation of frame objects. If you wanted some way to intercept frame evaluation in Python there really wasn’t any way without directly manipulating function pointers for defined functions. PEP 523 changes this by providing an API to make frame evaluation pluggable at the C level. This will allow for tools such as debuggers and JITs to intercept frame evaluation before the execution of Python code begins. This enables the use of alternative evaluation implementations for Python code, tracking frame evaluation, etc. This API is not part of the limited C API and is marked as private to signal that usage of this API is expected to be limited and only applicable to very select, low-level use-cases. Semantics of the API will change with Python as necessary.

PYTHONMALLOC environment variable

The new PYTHONMALLOC environment variable allows setting the Python memory allocators and installing debug hooks. It is now possible to install debug hooks on Python memory allocators on Python compiled in release mode using PYTHONMALLOC=debug. Effects of debug hooks:
* Newly allocated memory is filled with the byte 0xCB
* Freed memory is filled with the byte 0xDB
* Detect violations of the Python memory allocator API. For example, PyObject_Free() called on a memory block allocated by PyMem_Malloc().
* Detect writes before the start of a buffer (buffer underflows)
* Detect writes after the end of a buffer (buffer overflows)
* Check that the GIL is held when allocator functions of PYMEM_DOMAIN_OBJ (ex: PyObject_Malloc()) and PYMEM_DOMAIN_MEM (ex: PyMem_Malloc()) domains are called.
Checking if the GIL is held is also a new feature of Python 3.6. See the PyMem_SetupDebugHooks() function for debug hooks on Python memory allocators. It is now also possible to force the usage of the malloc() allocator of the C library for all Python memory allocations using PYTHONMALLOC=malloc. This is helpful when using external memory debuggers like Valgrind on a Python compiled in release mode. On error, the debug hooks on Python memory allocators now use the tracemalloc module to get the traceback where a memory block was allocated. Example of fatal error on buffer overflow using python3.6 -X tracemalloc=5 (store 5 frames in traces):
Debug memory block at address p=0x7fbcd41666f8: API 'o'
    4 bytes originally requested
    The 7 pad bytes at p-7 are FORBIDDENBYTE, as expected.
    The 8 pad bytes at tail=0x7fbcd41666fc are not all FORBIDDENBYTE (0xfb):
        at tail+0: 0x02 *** OUCH
        at tail+1: 0xfb
        at tail+2: 0xfb
        at tail+3: 0xfb
        at tail+4: 0xfb
        at tail+5: 0xfb
        at tail+6: 0xfb
        at tail+7: 0xfb
    The block was made by call #1233329 to debug malloc/realloc.
    Data at p: 1a 2b 30 00
Memory block allocated at (most recent call first):
File "test/", line 323
File "unittest/", line 600
File "unittest/", line 648
File "unittest/", line 122
File "unittest/", line 84
Fatal Python error: bad trailing pad byte
Current thread 0x00007fbcdbd32700 (most recent call first):
File "test/", line 323 in test_hex
File "unittest/", line 600 in run
File "unittest/", line 648 in call
File "unittest/", line 122 in run
File "unittest/", line 84 in call
File "unittest/", line 122 in run
File "unittest/", line 84 in call
(Contributed by Victor Stinner in bpo-26516 and bpo-26564.)

DTrace and SystemTap probing support

Python can now be built --with-dtrace which enables static markers for the following events in the interpreter: function call/return, garbage collection started/finished, line of code executed.
This can be used to instrument running interpreters in production, without the need to recompile specific debug builds or providing application-specific profiling/debugging code. More details in Instrumenting CPython with DTrace and SystemTap.
The current implementation is tested on Linux and macOS. Additional markers may be added in the future.
(Contributed by Łukasz Langa in bpo-21590, based on patches by Jesús Cea Avión, David Malcolm, and Nikhil Benesch.)

Other Language Changes

Some smaller changes made to the core Python language are: A global or nonlocal statement must now textually appear before the first use of the affected name in the same scope. Previously this was a SyntaxWarning.
It is now possible to set a special method to None to indicate that the corresponding operation is not available. For example, if a class sets __iter__() to None, the class is not iterable. (Contributed by Andrew Barnert and Ivan Levkivskyi in bpo-25958.)
Long sequences of repeated traceback lines are now abbreviated as "[Previous line repeated {count} more times]" (see traceback for an example). (Contributed by Emanuel Barry in bpo-26823.)
Import now raises the new exception ModuleNotFoundError (subclass of ImportError) when it cannot find a module. Code that currently checks for ImportError (in try-except) will still work. (Contributed by Eric Snow in bpo-15767.)
Class methods relying on zero-argument super() will now work correctly when called from metaclass methods during class creation. (Contributed by Martin Teichmann in bpo-23722.)

New Modules


The main purpose of the new secrets module is to provide an obvious way to reliably generate cryptographically strong pseudo-random values suitable for managing secrets, such as account authentication, tokens, and similar. Warning Note that the pseudo-random generators in the random module should NOT be used for security purposes. Use secrets on Python 3.6+ and os.urandom() on Python 3.5 and earlier.
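A minimal usage sketch of the new module:

```python
import secrets

token = secrets.token_hex(16)        # 16 random bytes as 32 hex characters
url_safe = secrets.token_urlsafe()   # URL-safe text, e.g. for reset links
color = secrets.choice(['red', 'green', 'blue'])  # CSPRNG-backed choice
print(len(token))  # 32
```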

Improved Modules


Exhausted iterators of array.array will now stay exhausted even if the iterated array is extended. This is consistent with the behavior of other mutable sequences. Contributed by Serhiy Storchaka in bpo-26492.


The new ast.Constant AST node has been added. It can be used by external AST optimizers for the purposes of constant folding. Contributed by Victor Stinner in bpo-26146.


Starting with Python 3.6 the asyncio module is no longer provisional and its API is considered stable. Notable changes in the asyncio module since Python 3.5.0 (all backported to 3.5.x due to the provisional status):
* The get_event_loop() function has been changed to always return the currently running loop when called from coroutines and callbacks. (Contributed by Yury Selivanov in bpo-28613.)
* The ensure_future() function and all functions that use it, such as loop.run_until_complete(), now accept all kinds of awaitable objects. (Contributed by Yury Selivanov.)
* New run_coroutine_threadsafe() function to submit coroutines to event loops from other threads. (Contributed by Vincent Michel.)
* New Transport.is_closing() method to check if the transport is closing or closed. (Contributed by Yury Selivanov.)
* The loop.create_server() method can now accept a list of hosts. (Contributed by Yann Sionneau.)
* New loop.create_future() method to create Future objects. This allows alternative event loop implementations, such as uvloop, to provide a faster asyncio.Future implementation. (Contributed by Yury Selivanov in bpo-27041.)
* New loop.get_exception_handler() method to get the current exception handler. (Contributed by Yury Selivanov in bpo-27040.)
* New StreamReader.readuntil() method to read data from the stream until a separator bytes sequence appears. (Contributed by Mark Korenberg.)
* The performance of StreamReader.readexactly() has been improved. (Contributed by Mark Korenberg in bpo-28370.)
* The loop.getaddrinfo() method is optimized to avoid calling the system getaddrinfo function if the address is already resolved. (Contributed by A. Jesse Jiryu Davis.)
* The loop.stop() method has been changed to stop the loop immediately after the current iteration. Any new callbacks scheduled as a result of the last iteration will be discarded. (Contributed by Guido van Rossum in bpo-25593.)
* Future.set_exception will now raise TypeError when passed an instance of the StopIteration exception. (Contributed by Chris Angelico in bpo-26221.)
* New loop.connect_accepted_socket() method to be used by servers that accept connections outside of asyncio, but that use asyncio to handle them. (Contributed by Jim Fulton in bpo-27392.)
* The TCP_NODELAY flag is now set for all TCP transports by default. (Contributed by Yury Selivanov in bpo-27456.)
* New loop.shutdown_asyncgens() to properly close pending asynchronous generators before closing the loop. (Contributed by Yury Selivanov in bpo-28003.)
* Future and Task classes now have an optimized C implementation which makes asyncio code up to 30% faster. (Contributed by Yury Selivanov and INADA Naoki in bpo-26081 and bpo-28544.)


The b2a_base64() function now accepts an optional newline keyword argument to control whether the newline character is appended to the return value. (Contributed by Victor Stinner in bpo-25357.)


The new cmath.tau (τ) constant has been added. (Contributed by Lisa Roach in bpo-12345, see PEP 628 for details.)
New constants: cmath.inf and cmath.nan to match math.inf and math.nan, and also cmath.infj and cmath.nanj to match the format used by complex repr. (Contributed by Mark Dickinson in bpo-23229.)


The new Collection abstract base class has been added to represent sized iterable container classes. (Contributed by Ivan Levkivskyi, docs by Neil Girdhar in bpo-27598.)
The new Reversible abstract base class represents iterable classes that also provide the reversed() method. (Contributed by Ivan Levkivskyi in bpo-25987.)
The new AsyncGenerator abstract base class represents asynchronous generators. (Contributed by Yury Selivanov in bpo-28720.)
The namedtuple() function now accepts an optional keyword argument module, which, when specified, is used for the module attribute of the returned named tuple class. (Contributed by Raymond Hettinger in bpo-17941.)
The verbose and rename arguments for namedtuple() are now keyword-only. (Contributed by Raymond Hettinger in bpo-25628.)
Recursive collections.deque instances can now be pickled. (Contributed by Serhiy Storchaka in bpo-26482.)


The ThreadPoolExecutor class constructor now accepts an optional thread_name_prefix argument to make it possible to customize the names of the threads created by the pool. (Contributed by Gregory P. Smith in bpo-27664.)


The contextlib.AbstractContextManager class has been added to provide an abstract base class for context managers. It provides a sensible default implementation for enter() which returns self and leaves exit() an abstract method. A matching class has been added to the typing module as typing.ContextManager. (Contributed by Brett Cannon in bpo-25609.)


The datetime and time classes have the new fold attribute used to disambiguate local time when necessary. Many functions in the datetime have been updated to support local time disambiguation. See Local Time Disambiguation section for more information. (Contributed by Alexander Belopolsky in bpo-24773.)
The datetime.strftime() and date.strftime() methods now support ISO 8601 date directives %G, %u and %V. (Contributed by Ashley Anderson in bpo-12006.)
The datetime.isoformat() function now accepts an optional timespec argument that specifies the number of additional components of the time value to include. (Contributed by Alessandro Cucci and Alexander Belopolsky in bpo-19475.)
The datetime.combine() now accepts an optional tzinfo argument. (Contributed by Alexander Belopolsky in bpo-27661.) decimal
New Decimal.as_integer_ratio() method that returns a pair (n, d) of integers that represent the given Decimal instance as a fraction, in lowest terms and with a positive denominator:
Decimal('-3.14').as_integer_ratio()(-157, 50)
(Contributed by Stefan Krah amd Mark Dickinson in bpo-25928.)


The default_format attribute has been removed from distutils.command.sdist.sdist and the formats attribute defaults to ['gztar']. Although not anticipated, any code relying on the presence of default_format may need to be adapted. See bpo-27819 for more details.
The upload command now longer tries to change CR end-of-line characters to CRLF. This fixes a corruption issue with sdists that ended with a byte equivalent to CR. (Contributed by Bo Bayles in bpo-32304.)


The new email API, enabled via the policy keyword to various constructors, is no longer provisional. The email documentation has been reorganized and rewritten to focus on the new API, while retaining the old documentation for the legacy API. (Contributed by R. David Murray in bpo-24277.) The email.mime classes now all accept an optional policy keyword. (Contributed by Berker Peksag in bpo-27331.)
The DecodedGenerator now supports the policy keyword.
There is a new policy attribute, message_factory, that controls what class is used by default when the parser creates new message objects. For the email.policy.compat32 policy this is Message, for the new policies it is EmailMessage. (Contributed by R. David Murray in bpo-20476.)


On Windows, added the 'oem' encoding to use CP_OEMCP, and the 'ansi' alias for the existing 'mbcs' encoding, which uses the CP_ACP code page. (Contributed by Steve Dower in bpo-27959.)


Two new enumeration base classes have been added to the enum module: Flag and IntFlags. Both are used to define constants that can be combined using the bitwise operators. (Contributed by Ethan Furman in bpo-23591.)
Many standard library modules have been updated to use the IntFlags class for their constants. The new value can be used to assign values to enum members automatically:
from enum import Enum, auto>>> class Color(Enum):... red = auto()...
blue = auto()... green = auto()...>>> list(Color)[, , ] faulthandler
On Windows, the faulthandler module now installs a handler for Windows exceptions: see faulthandler.enable(). (Contributed by Victor Stinner in bpo-23848.)


hook_encoded() now supports the errors argument. (Contributed by Joseph Hackman in bpo-25788.)


hashlib supports OpenSSL 1.1.0. The minimum recommend version is 1.0.2. (Contributed by Christian Heimes in bpo-26470.)
BLAKE2 hash functions were added to the module. blake2b() and blake2s() are always available and support the full feature set of BLAKE2. (Contributed by Christian Heimes in bpo-26798 based on code by Dmitry Chestnykh and Samuel Neves. Documentation written by Dmitry Chestnykh.)
The SHA-3 hash functions sha3_224(), sha3_256(), sha3_384(), sha3_512(), and SHAKE hash functions shake_128() and shake_256() were added. (Contributed by Christian Heimes in bpo-16113. Keccak Code Package by Guido Bertoni, Joan Daemen, Michaël Peeters, Gilles Van Assche, and Ronny Van Keer.)
The password-based key derivation function scrypt() is now available with OpenSSL 1.1.0 and newer. (Contributed by Christian Heimes in bpo-27928.)


HTTPConnection.request() and endheaders() both now support chunked encoding request bodies. (Contributed by Demian Brecht and Rolf Krahl in bpo-12319.)

idlelib and IDLE

The idlelib package is being modernized and refactored to make IDLE look and work better and to make the code easier to understand, test, and improve. Part of making IDLE look better, especially on Linux and Mac, is using ttk widgets, mostly in the dialogs. As a result, IDLE no longer runs with tcl/tk 8.4. It now requires tcl/tk 8.5 or 8.6. We recommend running the latest release of either.
‘Modernizing’ includes renaming and consolidation of idlelib modules. The renaming of files with partial uppercase names is similar to the renaming of, for instance, Tkinter and TkFont to tkinter and tkinter.font in 3.0. As a result, imports of idlelib files that worked in 3.5 will usually not work in 3.6. At least a module name change will be needed (see idlelib/README.txt), sometimes more. (Name changes contributed by Al Swiegart and Terry Reedy in bpo-24225. Most idlelib patches since have been and will be part of the process.)
In compensation, the eventual result with be that some idlelib classes will be easier to use, with better APIs and docstrings explaining them. Additional useful information will be added to idlelib when available.


Import now raises the new exception ModuleNotFoundError (subclass of ImportError) when it cannot find a module. Code that current checks for ImportError (in try-except) will still work. (Contributed by Eric Snow in bpo-15767.)
importlib.util.LazyLoader now calls create_module() on the wrapped loader, removing the restriction that importlib.machinery.BuiltinImporter and importlib.machinery.ExtensionFileLoader couldn’t be used with importlib.util.LazyLoader. importlib.util.cache_from_source(), importlib.util.source_from_cache(), and importlib.util.spec_from_file_location() now accept a path-like object.


The inspect.signature() function now reports the implicit .0 parameters generated by the compiler for comprehension and generator expression scopes as if they were positional-only parameters called implicit0. (Contributed by Jelle Zijlstra in bpo-19611.)
To reduce code churn when upgrading from Python 2.7 and the legacy inspect.getargspec() API, the previously documented deprecation of inspect.getfullargspec() has been reversed. While this function is convenient for single/source Python 2/3 code bases, the richer inspect.signature() interface remains the recommended approach for new code. (Contributed by Nick Coghlan in bpo-27172)


json.load() and json.loads() now support binary input. Encoded JSON should be represented using either UTF-8, UTF-16, or UTF-32. (Contributed by Serhiy Storchaka in bpo-17909.) logging
The new WatchedFileHandler.reopenIfNeeded() method has been added to add the ability to check if the log file needs to be reopened. (Contributed by Marian Horban in bpo-24884.)


The tau (τ) constant has been added to the math and cmath modules. (Contributed by Lisa Roach in bpo-12345, see PEP 628 for details.)


Proxy Objects returned by multiprocessing.Manager() can now be nested. (Contributed by Davin Potts in bpo-6766.)


See the summary of PEP 519 for details on how the os and os.path modules now support path-like objects.
scandir() now supports bytes paths on Windows.
A new close() method allows explicitly closing a scandir() iterator. The scandir() iterator now supports the context manager protocol. If a scandir() iterator is neither exhausted nor explicitly closed a ResourceWarning will be emitted in its destructor. (Contributed by Serhiy Storchaka in bpo-25994.) On Linux, os.urandom() now blocks until the system urandom entropy pool is initialized to increase the security. See the PEP 524 for the rationale.
The Linux getrandom() syscall (get random bytes) is now exposed as the new os.getrandom() function. (Contributed by Victor Stinner, part of the PEP 524)


pathlib now supports path-like objects. (Contributed by Brett Cannon in bpo-27186.) See the summary of PEP 519 for details.


The Pdb class constructor has a new optional readrc argument to control whether .pdbrc files should be read.


Objects that need new called with keyword arguments can now be pickled using pickle protocols older than protocol version 4. Protocol version 4 already supports this case. (Contributed by Serhiy Storchaka in bpo-24164.)


pickletools.dis() now outputs the implicit memo index for the MEMOIZE opcode. (Contributed by Serhiy Storchaka in bpo-25382.)


The pydoc module has learned to respect the MANPAGER environment variable. (Contributed by Matthias Klose in bpo-8637.)
help() and pydoc can now list named tuple fields in the order they were defined rather than alphabetically. (Contributed by Raymond Hettinger in bpo-24879.)


The new choices() function returns a list of elements of specified size from the given population with optional weights. (Contributed by Raymond Hettinger in bpo-18844.)


Added support of modifier spans in regular expressions. Examples: '(?i:p)ython' matches 'python' and 'Python', but not 'PYTHON'; '(?i)g(?-i:v)r' matches 'GvR' and 'gvr', but not 'GVR'. (Contributed by Serhiy Storchaka in bpo-433028.)
Match object groups can be accessed by getitem, which is equivalent to group(). So mo['name'] is now equivalent to'name'). (Contributed by Eric Smith in bpo-24454.)
Match objects now support index-like objects as group indices. (Contributed by Jeroen Demeyer and
Xiang Zhang in bpo-27177.)


Added set_auto_history() to enable or disable automatic addition of input to the history list. (Contributed by Tyler Crompton in bpo-26870.)


Private and special attribute names now are omitted unless the prefix starts with underscores. A space or a colon is added after some completed keywords. (Contributed by Serhiy Storchaka in bpo-25011 and bpo-25209.)


The shlex has much improved shell compatibility through the new punctuation_chars argument to control which characters are treated as punctuation. (Contributed by Vinay Sajip in bpo-1521950.) site
When specifying paths to add to sys.path in a .pth file, you may now specify file paths on top of directories (e.g. zip files). (Contributed by Wolfgang Langner in bpo-26587). sqlite3
sqlite3.Cursor.lastrowid now supports the REPLACE statement. (Contributed by Alex LordThorsen in bpo-16864.)


The ioctl() function now supports the SIO_LOOPBACK_FAST_PATH control code. (Contributed by Daniel Stokes in bpo-26536.)
The getsockopt() constants SO_DOMAIN, SO_PROTOCOL, SO_PEERSEC, and SO_PASSSEC are now supported. (Contributed by Christian Heimes in bpo-26907.)
The setsockopt() now supports the setsockopt(level, optname, None, optlen: int) form. (Contributed by Christian Heimes in bpo-27744.)
The socket module now supports the address family AFALG to interface with Linux Kernel crypto API. ALG*, SOL_ALG and sendmsg_afalg() were added. (Contributed by Christian Heimes in bpo-27744 with support from Victor Stinner.)
New Linux constants TCP_USER_TIMEOUT and TCP_CONGESTION were added. (Contributed by Omar Sandoval, issue:26273).


Servers based on the socketserver module, including those defined in http.server, xmlrpc.server and wsgiref.simple_server, now support the context manager protocol. (Contributed by Aviv Palivoda in bpo-26404.)
The wfile attribute of StreamRequestHandler classes now implements the io.BufferedIOBase writable interface. In particular, calling write() is now guaranteed to send the data in full. (Contributed by Martin Panter in bpo-26721.)


ssl supports OpenSSL 1.1.0. The minimum recommend version is 1.0.2. (Contributed by Christian Heimes in bpo-26470.)
3DES has been removed from the default cipher suites and ChaCha20 Poly1305 cipher suites have been added. (Contributed by Christian Heimes in bpo-27850 and bpo-27766.)
SSLContext has better default configuration for options and ciphers. (Contributed by Christian Heimes in bpo-28043.)
SSL session can be copied from one client-side connection to another with the new SSLSession class. TLS session resumption can speed up the initial handshake, reduce latency and improve performance (Contributed by Christian Heimes in bpo-19500 based on a draft by Alex Warhawk.)
The new get_ciphers() method can be used to get a list of enabled ciphers in order of cipher priority. All constants and flags have been converted to IntEnum and IntFlags. (Contributed by Christian Heimes in bpo-28025.)
Server and client-side specific TLS protocols for SSLContext were added. (Contributed by Christian Heimes in bpo-28085.)


A new harmonic_mean() function has been added. (Contributed by Steven D’Aprano in bpo-27181.)


struct now supports IEEE 754 half-precision floats via the 'e' format specifier. (Contributed by Eli Stevens, Mark Dickinson in bpo-11734.)


subprocess.Popen destructor now emits a ResourceWarning warning if the child process is still running. Use the context manager protocol (with proc: ...) or explicitly call the wait() method to read the exit status of the child process. (Contributed by Victor Stinner in bpo-26741.) The subprocess.Popen constructor and all functions that pass arguments through to it now accept encoding and errors arguments. Specifying either of these will enable text mode for the stdin, stdout and stderr streams. (Contributed by Steve Dower in bpo-6135.)


The new getfilesystemencodeerrors() function returns the name of the error mode used to convert between Unicode filenames and bytes filenames. (Contributed by Steve Dower in bpo-27781.) On Windows the return value of the getwindowsversion() function now includes the platform_version field which contains the accurate major version, minor version and build number of the current operating system, rather than the version that is being emulated for the process (Contributed by Steve Dower in bpo-27932.)
submitted by Marco_Diaz_SVFOE to EasyLearnProgramming [link] [comments]

Image signatures are necessary, but not adequate. Here's why you can't rely on them alone

From: Signed, Sealed, Deployed: Why Signatures Aren't Enough
Red Hat recently blogged about their progress in adding support for container image signing, a particularly interesting and most welcome aspect of the design is the way that the binary signature file can be decoupled from the registry and distributed separately. The blog makes interesting reading and I’d strongly recommending reading through it, I’m sure you’ll appreciate the design. And of course code is available on line.
Red Hat is, along with the other Linux distributors, well versed in the practice of signing software components to allow end users to verify that they are running authentic code and is in the process of extending this support to container images. The approach described is different to that taken previously by Docker Inc. however rather than comparing the two approaches I wanted to talk at a high level about the benefits of image signing along with some commentary about trust.
In the physical world we are all used to using our signature to confirm our identity. Probably the most common example is when we are signing a paper check or using an electronic signature pad during a sales transaction. How many times have you signed your signature so quickly that you do not even recognize it yourself? How many times in recent memory has a cashier or server compared the signature written with that on the back of your credit card? In my experience that check is likely to happen one out of every ten times and even in those cases that check is little more than a token gesture and the two signatures may not have matched.
That leads me to the first important observation: a signature mechanism is only useful if it is checked. Obviously when vendors such as Docker Inc, Red Hat and others implement an image signing and validation system, the enforcement will be built into to all layers, so that in one example a Red Hat delivered image will be validated by a Red Hat provided Docker runtime to ensure it’s signed by a valid source.
However it’s more likely that the images that you deploy in your enterprise won’t just be the images downloaded from a registry, but instead images built on top of these images or perhaps even built from scratch, so for image signing to provide the required level of security all images created within your enterprise should also be signed and have those signatures validated before the image is deployed. Some early users of image signing that we have talked to have used image signing less as a way of tracking provenance of images but instead as a method to show that an image has not been modified between leaving the CI/CD pipeline and being deployed on their container host.
Before we dig into the topic of image signing it’s worth discussing what a signature actually represents. The most common example of signatures that we see in our day to day life is in our web browsers where we look for the little green padlock in address bar that indicates that the connection to the web server from our browser is encrypted but most importantly it confirms that you are talking to the expected website.
The use of TLS/SSL certificates allows your browser to validate that when you connect to the content displayed actually came from So in this example the signature was used to confirm the source of the (web) content. Over many years we have been trained NOT to type our credit card details into a site that is NOT delivered through HTTPS. But that does not mean that you would trust your credit card details to any site that uses HTTPS.
The same principle applies to the use of image signatures. If you download an image signed by Red Hat, Docker Inc, or any other vendor, you can be assured that the image did come from this vendor. The level of confidence you have in the contents of the image is based on the level of trust you already have with the vendor. For example you are likely not to run an image signed by l33thackerz even though it may include a valid signature. As enterprises move to a DevOps model with containers we’re seeing a new software supply chain, which often begins with a base image pulled from DockerHub or a vendor registry. This base image may be modified by the operations team to include extra packages or to customize specific configuration files. The resulting image is then published in the local registry to be used by the development team as the base image for their application container. In many organizations we are starting to see other participants in this supply chain, for example a middleware team may publish an image containing an application server that is in turn used by an application team.
For the promise of image signing to be fulfilled, at each stage of this supply chain each team must sign the image to ensure that the ‘chain of custody’ can be validated throughout the software development lifecycle. As we covered previously those signatures only serves to prove the source of an image, during any point in the supply chain from the original vendor of the base image all the way through the development process the images may be modified. At any step in the supply chain a mistake may be made, an outdated package that contains known bugs of vulnerabilities may be used, an insecure configuration option in an application’s configuration file, or perhaps secrets such as passwords or API keys may be stored in the image.
Signing an image will not prevent insecure or otherwise non-compliant images from being deployed, however as part of a post mortem it will provide a way of tracking down when the vulnerability or bug was introduced.
During each stage of the supply chain detailed checks should be performed on the image to ensure that the image complies with your site specific policies.
These policies could cover security, starting with the ubiquitous CVE scan but then going further to analyze configuration of key security components. For example, you could have the latest version of the Apache web server but have configured the wrong set of TLS Ciphers suites leading to insecure communication. In addition to security, your policies could cover application specific configurations to comply with best practices or to enable consistency and predictability.
Anchore’s goal is to provide a toolset that allows developers, operations, and security teams to maintain full visibility of the ‘chain of custody’ as containers move through the development lifecycle, while providing the visibility, predictability, and control needed for production deployment.
With Anchore’s tools the analysis and policy evaluation could be run during each stage of the supply chain allowing the signatures to attest to both the source of the image and also the compliance of the image’s contents.
In summary, we believe that image signing is an important part of the security and integrity of your software supply chain however signatures alone will not ensure the integrity of your systems.
submitted by weighanchore to devops [link] [comments]

Otp Is A Perfect Cipher Pt 1 Solution - Applied Cryptography Encryption Technique : One time Pad with example - YouTube One Time Pad (Vernam Cipher) Explained with Solved Example ... 2 1 Information theoretic security and the one time pad 19 min 2-4 One-Time Pad (OTP) 相關加密法 Vernam Cipher (One-Time Pad) - YouTube 02 XOR Cipher Characters and Binary Asymmetric cipher An Unbreakable Cipher: One Time Pad Part Two Number of keys for ideal block cipher (Statistics Examples 8)

The OTP, or One-Time Pad, also known as the Vernam cipher, is, according to the NSA, "perhaps one of the most important in the history of cryptography." If executed correctly, it provides uncrackable encryption. It has an interesting and storied history, dating back to the 1880s, when Frank Miller, a Yale graduate, invented the idea of the OTP. Communication was expensive and difficult in the ... It is called the Vernam cipher or one-time pad. The worth of all other ciphers is based on computational security. If a cipher is computationally secure this means the probability of cracking the encryption key using current computational technology and algorithms within a reasonable time is supposedly extremely small, yet not impossible. In theory, every cryptographic algorithm except for the ... A Unary Cipher with Advantages over the Vernam Cipher V ... smarter than their designers. By contrast, the Vernam One-Time-Pad cipher is free from these vulnerabilities, which is why it is the cipher of choice against such perceived threats. Alas, Vernam key management is very exacting and cumbersome, and it is also plagued by a serious authentication vulnerability. It is therefore of some ... The one-time pad is famous as being the only completely unbreakable cipher . Assuming that the secret pad is randomly generated, not-reused (hence "one-time pad"), and not leaked, it is impossible to learn a single bit of the plaintext of a message from a ciphertext. The one-time-pad is one of the best cryptography protocols when the work must be done by hand, without the aid of a computer. One-time-pad is an encryption process that uses random key, that changes from session to session. The random nature of the key and the fact that each session uses a unique key make it harder to ... One-Time Pad Or Vernam Cipher. Sayed Mahdi Mohammad Hasanzadeh [email protected] Spring 2004 OTP System. The one-time pad, which is a provably secure cryptosystem, was developed by Gilbert Vernam in 1918. 
The message is represented as a binary string (a sequence of 0s and 1s using a coding mechanism such as ASCII coding. This is how we generate a one-time pad of any given size: $ ./ generate test.key -s 1024 $ ls -l test.key -rw-r--r-- 1 user group 1024 2010-02-17 01:23 test.key $ The alternative for the lazy is to pass the name of the file we want to encrypt. A one-time pad of the exact same size will be generated. We’ll use the -f flag this time to ...

[index] [17749] [6066] [3131] [26786] [8612] [2265] [1892] [20695] [24200] [27740]

Otp Is A Perfect Cipher Pt 1 Solution - Applied Cryptography

The second part of my series on the One Time Pad, where I demonstrate how to make your own mathematically unbreakable cyphers for private communications (assuming you have a lot of time on your ... The one-time pad Journey into ... 2:56. Lecture 3: Stream Ciphers, Random Numbers and the One Time Pad by Christof Paar - Duration: 1:29:39. Introduction to Cryptography by Christof Paar 118,849 ... فك التشفير باستخدام one time pad - Duration: 8:39. ... Monoalphabetic Cipher Encryption / Decryption - شرح بالعربي - Duration: 2:53. iTeam Academy 43,169 views. 2:53 ... Lecture 3: Stream Ciphers, Random Numbers and the One Time Pad by Christof Paar - Duration: 1:29:39. Introduction to Cryptography by Christof Paar 127,244 views. 1:29:39. Cryptography 101 ... (2 of 4) Work-through tutorial on creating a cipher system in Excel 2010 using binary XOR. The next video is starting stop. Loading... Watch Queue The Vernam cipher (aka the one-time pad, or Vigenere OTP) is the only encryption algorithm with perfect security, meaning it is unbreakable. The general conc... 📚📚📚📚📚📚📚📚 GOOD NEWS FOR COMPUTER ENGINEERS INTRODUCING 5 MINUTES ENGINEERING 🎓🎓🎓🎓🎓🎓🎓🎓 SUBJECT :- Discrete Mathematics (DM) Theory Of Computation (TOC ... Classical Encryption Technique One time Pad GTU SEM 6 Information Security CSE /IT For the Love of Physics - Walter Lewin - May 16, 2011 - Duration: 1:01:26. Lectures by Walter Lewin. They will make you ♥ Physics. Recommended for you