# Apache Avro

• Original author(s): Apache
• Stable release: 1.8.2 / May 20, 2017
• Development status: Active
• License: Apache License 2.0
• Website: avro.apache.org

Apache Avro is a remote procedure call and data serialization framework developed within Apache's Hadoop project. It uses JSON for defining data types and protocols, and serializes data in a compact binary format. Its primary use is in Apache Hadoop, where it can provide both a serialization format for persistent data, and a wire format for communication between Hadoop nodes, and from client programs to the Hadoop services.

It is similar to Thrift and Protocol Buffers, but does not require running a code-generation program when a schema changes (unless desired for statically-typed languages).

Spark SQL can access Avro as a data source. [Source 1].

## Comparison with other systems

Avro provides:

• Rich data structures.
• A compact, fast, binary data format.
• A container file, to store persistent data.
• Remote procedure call (RPC).
• Simple integration with dynamic languages. Code generation is not required to read or write data files nor to use or implement RPC protocols. Code generation is an optional optimization, only worth implementing for statically typed languages.

Avro provides functionality similar to systems such as Thrift, Protocol Buffers, etc. Avro differs from these systems in the following fundamental aspects.

• Dynamic typing: Avro does not require that code be generated. Data is always accompanied by a schema that permits full processing of that data without code generation, static datatypes, etc. This facilitates construction of generic data-processing systems and languages.
• Untagged data: Since the schema is present when data is read, considerably less type information need be encoded with data, resulting in smaller serialization size.
• No manually-assigned field IDs: When a schema changes, both the old and new schema are always present when processing data, so differences may be resolved symbolically, using field names.

## Schemas

Avro relies on schemas. When Avro data is read, the schema used when writing it is always present. This permits each datum to be written with no per-value overheads, making serialization both fast and small. This also facilitates use with dynamic, scripting languages, since data, together with its schema, is fully self-describing. When Avro data is stored in a file, its schema is stored with it, so that files may be processed later by any program. If the program reading the data expects a different schema this can be easily resolved, since both schemas are present.

When Avro is used in RPC, the client and server exchange schemas in the connection handshake. (This can be optimized so that, for most calls, no schemas are actually transmitted.) Since both client and server both have the other's full schema, correspondence between same named fields, missing fields, extra fields, etc. can all be easily resolved.

Avro schemas are defined with JSON. This facilitates implementation in languages that already have JSON libraries.

## Schema Declaration

A Schema is represented in JSON by one of:

• A JSON string, naming a defined type.
• A JSON object, of the form: {"type": "typeName" ...attributes...}
• A JSON array, representing a union of embedded types.
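The three forms can be illustrated with plain JSON (a sketch using Python's standard json module; the variable names are mine):

```python
import json

# 1. A JSON string, naming a defined type:
name_form = json.loads('"string"')

# 2. A JSON object of the form {"type": "typeName" ...attributes...}:
object_form = json.loads('{"type": "array", "items": "string"}')

# 3. A JSON array, representing a union of embedded types:
union_form = json.loads('["null", "string"]')

print(name_form, object_form["type"], union_form)
```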

### Primitive Types

The set of primitive type names is:

• null: no value
• boolean: a binary value
• int: 32-bit signed integer
• long: 64-bit signed integer
• float: single precision (32-bit) IEEE 754 floating-point number
• double: double precision (64-bit) IEEE 754 floating-point number
• bytes: sequence of 8-bit unsigned bytes
• string: unicode character sequence

Primitive types have no specified attributes. Primitive type names are also defined type names. Thus, for example, the schema "string" is equivalent to: {"type": "string"}
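This equivalence can be sketched as a one-line normalization (the helper name is mine, not part of any Avro API):

```python
def normalized(schema):
    """Expand a bare type-name string to the equivalent object form."""
    return {"type": schema} if isinstance(schema, str) else schema

print(normalized("string"))            # {'type': 'string'}
print(normalized({"type": "string"}))  # {'type': 'string'}
```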

### Complex Types

Avro supports six kinds of complex types: records, enums, arrays, maps, unions and fixed.

#### Records

Records use the type name "record" and support the following attributes:

• name: a JSON string providing the name of the record (required).
• namespace: a JSON string that qualifies the name (optional).
• doc: a JSON string providing documentation to the user of this schema (optional).
• aliases: a JSON array of strings, providing alternate names for this record (optional).
• fields: a JSON array, listing fields (required). Each field is a JSON object with the following attributes:
1. name: a JSON string providing the name of the field (required).
2. doc: a JSON string describing this field for users (optional).
3. type: a JSON object defining a schema, or a JSON string naming a record definition (required).
4. default: a default value for this field, used when reading instances that lack this field (optional).
5. order: specifies how this field impacts sort ordering of this record (optional). Valid values are "ascending" (the default), "descending", or "ignore". For more details on how this is used, see the sort order section of the Avro specification.
6. aliases: a JSON array of strings, providing alternate names for this field (optional).

For example:

{
  "type": "record",
  "name": "LongList",
  "aliases": ["LinkedLongs"],                       // old name for this
  "fields" : [
    {"name": "value", "type": "long"},              // each element has a long
    {"name": "next", "type": ["null", "LongList"]}  // optional next element
  ]
}
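Because LongList refers to itself, a value of this schema is a linked list. A sketch of such a value as plain Python dicts, before serialization (the traversal helper is mine, not part of the Avro API):

```python
# A LongList value as nested dicts: each node has a "value" and an
# optional "next" (the "null" branch of ["null", "LongList"] maps to None).
long_list = {"value": 1, "next": {"value": 2, "next": None}}

def values(node):
    """Collect the "value" fields by following "next" links."""
    out = []
    while node is not None:
        out.append(node["value"])
        node = node["next"]
    return out

print(values(long_list))  # [1, 2]
```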

#### Enums

Enums use the type name "enum" and support the following attributes:

• name: a JSON string providing the name of the enum (required).
• namespace: a JSON string that qualifies the name (optional).
• aliases: a JSON array of strings, providing alternate names for this enum (optional).
• doc: a JSON string providing documentation to the user of this schema (optional).
• symbols: a JSON array, listing symbols, as JSON strings (required). All symbols in an enum must be unique; duplicates are prohibited. Every symbol must match the regular expression [A-Za-z_][A-Za-z0-9_]* (the same requirement as for names).

For example:

{
  "type": "enum",
  "name": "Suit",
  "symbols" : ["SPADES", "HEARTS", "DIAMONDS", "CLUBS"]
}
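The symbol rules above (uniqueness, and the name regular expression) can be sketched as a validity check (the helper name is mine):

```python
import re

# Symbols must be unique and match [A-Za-z_][A-Za-z0-9_]*, the same rule as names.
NAME_RE = re.compile(r"[A-Za-z_][A-Za-z0-9_]*\Z")

def valid_symbols(symbols):
    return len(set(symbols)) == len(symbols) and \
        all(NAME_RE.match(s) for s in symbols)

print(valid_symbols(["SPADES", "HEARTS", "DIAMONDS", "CLUBS"]))  # True
print(valid_symbols(["SPADES", "SPADES"]))                       # False: duplicate
print(valid_symbols(["1NT"]))                                    # False: starts with a digit
```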

#### Arrays

Arrays use the type name "array" and support a single attribute:

• items: the schema of the array's items.

For example:

{"type": "array", "items": "string"}

#### Maps

Maps use the type name "map" and support one attribute:

• values: the schema of the map's values.

Map keys are assumed to be strings. For example:

{"type": "map", "values": "long"}

#### Unions

Unions, as mentioned above, are represented using JSON arrays. For example, ["null", "string"] declares a schema which may be either a null or string. Unions may not contain more than one schema with the same type, except for the named types record, fixed and enum. For example, unions containing two array types or two map types are not permitted, but two types with different names are permitted. (Names permit efficient resolution when reading and writing unions.) Unions may not immediately contain other unions.
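These union constraints can be sketched as a validity check over JSON-style branch schemas (the helper name is mine; only the rules stated above are encoded):

```python
def valid_union(branches):
    """Check Avro's union constraints on a list of branch schemas."""
    seen = set()
    for b in branches:
        if isinstance(b, list):
            return False  # unions may not immediately contain other unions
        t = b["type"] if isinstance(b, dict) else b
        if t in ("record", "enum", "fixed"):
            key = (t, b["name"])  # named types are distinguished by name
        else:
            key = t               # any other type may appear at most once
        if key in seen:
            return False
        seen.add(key)
    return True

print(valid_union(["null", "string"]))  # True
print(valid_union([{"type": "array", "items": "int"},
                   {"type": "array", "items": "string"}]))  # False: two array types
```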

#### Fixed

Fixed uses the type name "fixed" and supports the following attributes:

• name: a string naming this fixed (required).
• namespace: a string that qualifies the name (optional).
• aliases: a JSON array of strings, providing alternate names for this fixed (optional).
• size: an integer, specifying the number of bytes per value (required).

For example:

{"type": "fixed", "size": 16, "name": "md5"}

## Object Container Files

Avro includes a simple object container file format. A file has a schema, and all objects stored in the file must be written according to that schema, using binary encoding. Objects are stored in blocks that may be compressed. Synchronization markers are used between blocks to permit efficient splitting of files for MapReduce processing. Files may include arbitrary user-specified metadata. A file consists of:

• A file header, followed by
• one or more file data blocks.

A file header consists of:

• Four bytes, ASCII 'O', 'b', 'j', followed by the byte 1 (the container format version).
• File metadata, including the schema.
• The 16-byte, randomly-generated sync marker for this file.
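A minimal sketch of checking the four magic bytes at the start of a container file (the helper name is mine, not part of the Avro API):

```python
import io

MAGIC = b"Obj\x01"  # ASCII 'O', 'b', 'j', followed by the byte 1

def has_avro_magic(stream):
    """Return True if the stream begins with the Avro container magic."""
    return stream.read(4) == MAGIC

print(has_avro_magic(io.BytesIO(b"Obj\x01...rest of file...")))  # True
print(has_avro_magic(io.BytesIO(b"PK\x03\x04")))                 # False
```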

File metadata is written as if defined by the following map schema:

{"type": "map", "values": "bytes"}

• avro.schema: contains the schema of objects stored in the file, as JSON data (required).
• avro.codec: the name of the compression codec used to compress blocks, as a string. Implementations are required to support the following codecs: "null" and "deflate". If codec is absent, it is assumed to be "null". The codecs are described in more detail in the Avro specification.

A file header is thus described by the following schema:

{"type": "record", "name": "org.apache.avro.file.Header",
 "fields" : [
   {"name": "magic", "type": {"type": "fixed", "name": "Magic", "size": 4}},
   {"name": "meta", "type": {"type": "map", "values": "bytes"}},
   {"name": "sync", "type": {"type": "fixed", "name": "Sync", "size": 16}}
 ]
}

A file data block consists of:

• A long indicating the count of objects in this block.
• A long indicating the size in bytes of the serialized objects in the current block, after any codec is applied.
• The serialized objects. If a codec is specified, this is compressed by that codec.
• The file's 16-byte sync marker.

Thus, each block's binary data can be efficiently extracted or skipped without deserializing the contents. The combination of block size, object counts, and sync markers enables detection of corrupt blocks and helps ensure data integrity.
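The long values in the block layout use Avro's variable-length zigzag encoding (small magnitudes take few bytes). A sketch of that encoding, with function names of my choosing:

```python
def encode_long(n):
    """Zigzag-encode a signed long, then emit base-128 varint bytes."""
    n = (n << 1) ^ (n >> 63)  # zigzag: maps -1 -> 1, 1 -> 2, -2 -> 3, ...
    out = bytearray()
    while n & ~0x7F:
        out.append((n & 0x7F) | 0x80)  # low 7 bits, continuation bit set
        n >>= 7
    out.append(n)
    return bytes(out)

def decode_long(buf, pos=0):
    """Inverse of encode_long; returns (value, next position)."""
    result, shift = 0, 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        shift += 7
        if not b & 0x80:
            break
    return (result >> 1) ^ -(result & 1), pos

print(encode_long(1))                    # b'\x02'
print(encode_long(-1))                   # b'\x01'
print(decode_long(encode_long(256))[0])  # 256
```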

## Protocol Declaration

Avro protocols describe RPC interfaces. Like schemas, they are defined with JSON text. A protocol is a JSON object with the following attributes:

• protocol, a string, the name of the protocol (required);
• namespace, an optional string that qualifies the name;
• doc, an optional string describing this protocol;
• types, an optional list of definitions of named types (records, enums, fixed and errors). An error definition is just like a record definition except it uses "error" instead of "record". Note that forward references to named types are not permitted.
• messages, an optional JSON object whose keys are message names and whose values are objects whose attributes are described below. No two messages may have the same name.

The name and namespace qualification rules defined for schema objects apply to protocols as well.

### Messages

A message has attributes:

• a doc, an optional description of the message,
• a request, a list of named, typed parameter schemas (this has the same form as the fields of a record declaration);
• a response schema;
• an optional union of declared error schemas. The effective union has "string" prepended to the declared union, to permit transmission of undeclared "system" errors. For example, if the declared error union is ["AccessError"], then the effective union is ["string", "AccessError"]. When no errors are declared, the effective error union is ["string"]. Errors are serialized using the effective union; however, a protocol's JSON declaration contains only the declared union.
• an optional one-way boolean parameter.

A request parameter list is processed equivalently to an anonymous record. Since record field lists may vary between reader and writer, request parameters may also differ between the caller and responder, and such differences are resolved in the same manner as record field differences. The one-way parameter may only be true when the response type is "null" and no errors are listed.
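The effective error union described above can be sketched as follows (the helper name is mine):

```python
def effective_error_union(declared=None):
    """Avro prepends "string" so undeclared "system" errors can be sent."""
    return ["string"] + list(declared or [])

print(effective_error_union(["AccessError"]))  # ['string', 'AccessError']
print(effective_error_union())                 # ['string']
```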

### Sample Protocol

For example, one may define a simple HelloWorld protocol with:

{
  "namespace": "com.acme",
  "protocol": "HelloWorld",
  "doc": "Protocol Greetings",

  "types": [
    {"name": "Greeting", "type": "record", "fields": [
      {"name": "message", "type": "string"}]},
    {"name": "Curse", "type": "error", "fields": [
      {"name": "message", "type": "string"}]}
  ],

  "messages": {
    "hello": {
      "doc": "Say hello.",
      "request": [{"name": "greeting", "type": "Greeting"}],
      "response": "Greeting",
      "errors": ["Curse"]
    }
  }
}

## Serializing and deserializing

Data in Avro might be stored with its corresponding schema, meaning a serialized item can be read without knowing the schema ahead of time.

Example serialization and deserialization code in Python[Source 2]. Serialization:

import avro.schema
from avro.datafile import DataFileWriter
from avro.io import DatumWriter

schema = avro.schema.parse(open("user.avsc").read())  # need to know the schema to write

writer = DataFileWriter(open("users.avro", "wb"), DatumWriter(), schema)
writer.append({"name": "Alyssa", "favorite_number": 256})
writer.append({"name": "Ben", "favorite_number": 7, "favorite_color": "red"})
writer.close()

File "users.avro" will contain the schema in JSON and a compact binary representation of the data [Source 3]:

\$ od -c users.avro
0000000    O   b   j 001 004 026   a   v   r   o   .   s   c   h   e   m
0000020    a 272 003   {   "   t   y   p   e   "   :       "   r   e   c
0000040    o   r   d   "   ,       "   n   a   m   e   s   p   a   c   e
0000060    "   :       "   e   x   a   m   p   l   e   .   a   v   r   o
0000100    "   ,       "   n   a   m   e   "   :       "   U   s   e   r
0000120    "   ,       "   f   i   e   l   d   s   "   :       [   {   "
0000140    t   y   p   e   "   :       "   s   t   r   i   n   g   "   ,
0000160        "   n   a   m   e   "   :       "   n   a   m   e   "   }
0000200    ,       {   "   t   y   p   e   "   :       [   "   i   n   t
0000220    "   ,       "   n   u   l   l   "   ]   ,       "   n   a   m
0000240    e   "   :       "   f   a   v   o   r   i   t   e   _   n   u
0000260    m   b   e   r   "   }   ,       {   "   t   y   p   e   "   :
0000300        [   "   s   t   r   i   n   g   "   ,       "   n   u   l
0000320    l   "   ]   ,       "   n   a   m   e   "   :       "   f   a
0000340    v   o   r   i   t   e   _   c   o   l   o   r   "   }   ]   }
0000360  024   a   v   r   o   .   c   o   d   e   c  \b   n   u   l   l
0000400   \0 211 266   / 030 334   ˪  **   P 314 341 267 234 310   5 213
0000420    6 004   ,  \f   A   l   y   s   s   a  \0 200 004 002 006   B
0000440    e   n  \0 016  \0 006   r   e   d 211 266   / 030 334   ˪  **
0000460    P 314 341 267 234 310   5 213   6
0000471


Deserialization:

from avro.datafile import DataFileReader
from avro.io import DatumReader

reader = DataFileReader(open("users.avro", "rb"), DatumReader())  # no need to know the schema to read
for user in reader:
    print user
reader.close()

This outputs:

{u'favorite_color': None, u'favorite_number': 256, u'name': u'Alyssa'}
{u'favorite_color': u'red', u'favorite_number': 7, u'name': u'Ben'}


## Languages with APIs

Though theoretically any language could use Avro, APIs have been written for languages including C, C++, C#, Java, JavaScript, Perl, PHP, Python, and Ruby.

## Avro IDL

In addition to supporting JSON for type and protocol definitions, Avro includes experimental support for an alternative interface description language (IDL) syntax known as Avro IDL. Previously known as GenAvro, this format is designed to ease adoption by users familiar with more traditional IDLs and programming languages, with a syntax similar to C/C++, Protocol Buffers and others.

## Sources

1. Article // Dataconomy. [2019]. URL: https://dataconomy.com/2016/04/3-reasons-hadoop-analytics-big-deal/ (Retrieved: 30.12.2018)
2. Getting Started (Python) // Apache Avro. [2019]. URL: https://avro.apache.org/docs/current/gettingstartedpython.html (Retrieved: 30.12.2018)
3. Specification: Data Serialization // Apache Avro. [2018]. URL: https://avro.apache.org/docs/current/spec.html#Data+Serialization (Retrieved: 30.12.2018)
4. Getting Started (Python) // Apache Avro. [2018]. URL: https://avro.apache.org/docs/current/gettingstartedpython.html (Retrieved: 30.12.2018)