UDP proxy: Don't attempt to dissect dgram into records when dropping

To prevent dropping the same message over and over again, the UDP proxy
test application programs/test/udp_proxy _logically_ maintains a mapping
from records to the number of times each record has already been dropped,
and stops dropping a record once a configurable threshold (currently 2)
has been reached.
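As a rough sketch of this intended logic only -- this is NOT the actual
udp_proxy code, and record_key(), DROP_SLOTS and should_drop() are
illustrative inventions -- the per-record accounting could look like this:

    #include <stddef.h>

    /* Illustrative sketch of the intended per-record drop accounting.
     * udp_proxy itself uses a much cruder approximation, described below. */
    #define DROP_MAX   2        /* same threshold as udp_proxy's DROP_MAX */
    #define DROP_SLOTS 2048

    static unsigned char drop_count[DROP_SLOTS];

    /* Identify a record by a simple hash of its bytes (illustrative only). */
    static size_t record_key( const unsigned char *rec, size_t len )
    {
        size_t h = 5381;
        for( size_t i = 0; i < len; i++ )
            h = h * 33 + rec[i];
        return h % DROP_SLOTS;
    }

    /* Return 1 if the record may still be dropped, and account for the drop. */
    static int should_drop( const unsigned char *rec, size_t len )
    {
        size_t key = record_key( rec, len );
        if( drop_count[key] >= DROP_MAX )
            return 0;
        ++drop_count[key];
        return 1;
    }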

However, the actual implementation deviates from this logical view
in two crucial respects:
- To keep the implementation simple and independent of implementations
  of suitable map interfaces, it only counts how many times a record of
  a given _size_ has been dropped, and stops dropping further records of
  that size once the configurable threshold is reached. This is of
  course not foolproof, but it is a good enough approximation for the
  proxy, and it allows the use of an inefficient but simple array as the
  required map.
- The implementation mixes datagram lengths and record lengths: when
  deciding whether it is allowed to drop a datagram, it uses the total
  datagram size as a lookup index into the map counting the number of
  times a packet has been dropped. However, when updating this map, the
  UDP proxy traverses the datagram record by record and updates the
  mapping at the level of record lengths (see the sketch below).
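Condensed and partly hypothetical (may_drop() below is an illustration of
the decision path, not a verbatim copy of udp_proxy; 'packet', 'dropped'
and 'DROP_MAX' are the datagram struct, counter array and threshold from
udp_proxy), the mismatch looks roughly like this:

    /* Drop decision: keyed by the *datagram* length... */
    static int may_drop( const packet *p )
    {
        return dropped[p->len % sizeof( dropped )] < DROP_MAX;
    }

    /* ...but the counter update walks the datagram and bumps slots
     * keyed by the individual *record* lengths. */
    static void update_dropped_old( const packet *p )
    {
        const unsigned char *cur = p->buf;
        const unsigned char *end = p->buf + p->len;

        while( cur < end )
        {
            /* 13-byte DTLS record header; length at offsets 11-12. */
            size_t len = ( ( (size_t) cur[11] << 8 ) | cur[12] ) + 13;
            ++dropped[len % sizeof( dropped )];
            cur += len; /* note: no bounds check on len, see below */
        }
    }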

Apart from this inconsistency, the current implementation lacks bounds
checking on the parsed length of incoming DTLS records, which can lead
to a buffer overflow when facing malformed records.
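For illustration only: if the dissection were kept, a bounds-checked
traversal would have to look roughly like the following sketch. This is
NOT the route the commit takes -- it removes the traversal altogether.

    /* Hypothetical bounds-checked variant of the record traversal. */
    static void update_dropped_checked( const packet *p )
    {
        const unsigned char *cur = p->buf;
        const unsigned char *end = p->buf + p->len;

        /* A DTLS 1.2 record header is 13 bytes; the record length is
         * stored big-endian at offsets 11-12. */
        while( (size_t)( end - cur ) >= 13 )
        {
            size_t len = ( ( (size_t) cur[11] << 8 ) | cur[12] ) + 13;
            if( len > (size_t)( end - cur ) )
                break; /* malformed length field, stop parsing */
            ++dropped[len % sizeof( dropped )];
            cur += len;
        }
    }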

This commit removes the inconsistency between datagram and record
lengths and resolves the buffer overflow by not attempting any
dissection of datagrams into records, and by instead only counting how
often _datagrams_ of a particular size have been dropped.
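Schematically, both sides of the bookkeeping now key off the datagram
length and therefore agree (again a hypothetical condensation; the actual
new update_dropped() is in the diff below):

    static int may_drop( const packet *p )
    {
        return dropped[p->len % sizeof( dropped )] < DROP_MAX;
    }

    static void update_dropped_new( const packet *p )
    {
        ++dropped[p->len % sizeof( dropped )];
    }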

There is only one practical situation where this makes a difference:
if datagram packing is used by default but disabled on retransmission
(which OpenSSL has been observed to do), it can happen that we drop a
datagram on its initial transmission, then also drop some of its
records when they are retransmitted one by one afterwards, yet still
keep the drop counter at 1 instead of 2. However, even in this
situation, we will correctly count the drops from that point on and
eventually stop dropping, because the peer will not fall back to using
packing and will hence use stable record lengths.

Author: Hanno Becker
Date:   2019-06-04 13:04:28 +01:00
Parent: 16772566d9
Commit: bcf97ec18c

@@ -598,32 +598,17 @@ int send_delayed()
 static unsigned char dropped[2048] = { 0 };
 #define DROP_MAX 2

-/*
- * OpenSSL groups packets in a datagram the first time it sends them, but not
- * when it resends them. Count every record as seen the first time.
- */
+/* We only drop packets at the level of entire datagrams, not at the level
+ * of records. In particular, if the peer changes the way it packs multiple
+ * records into a single datagram, we don't necessarily count the number of
+ * times a record has been dropped correctly. However, the only known reason
+ * why a peer would change datagram packing is disabling the latter on
+ * retransmission, in which case we'd drop involved records at most
+ * DROP_MAX + 1 times. */
 void update_dropped( const packet *p )
 {
     size_t id = p->len % sizeof( dropped );
-    const unsigned char *end = p->buf + p->len;
-    const unsigned char *cur = p->buf;
-    size_t len = ( ( cur[11] << 8 ) | cur[12] ) + 13;
-
     ++dropped[id];
-
-    /* Avoid counting single record twice */
-    if( len == p->len )
-        return;
-
-    while( cur < end )
-    {
-        len = ( ( cur[11] << 8 ) | cur[12] ) + 13;
-        id = len % sizeof( dropped );
-        ++dropped[id];
-        cur += len;
-    }
 }

 int handle_message( const char *way,