neato compression of key-value data

npm install efrt

if your data looks like this:

var data = {
  bedfordshire: 'England',
  aberdeenshire: 'Scotland',
  buckinghamshire: 'England',
  argyllshire: 'Scotland',
  cambridgeshire: 'England',
  cheshire: 'England',
  ayrshire: 'Scotland',
  banffshire: 'Scotland'
}
you can compress it like this:

var str = efrt.pack(data);

then _very!_ quickly flip it back into:

var obj = efrt.unpack(str);

efrt packs category-type data into a very compressed prefix-trie format, so that redundancies in the data are shared, and nothing is repeated.
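To make that concrete, here is a toy sketch (not efrt's actual wire format) of the idea: invert the `{key: value}` object into `{value: [keys]}`, then insert the keys into a shared prefix trie so common beginnings are stored once:

```javascript
// toy sketch, NOT efrt's real format: group keys by their value,
// then share their prefixes in a trie
function groupByValue(obj) {
  const groups = {}
  for (const [key, val] of Object.entries(obj)) {
    (groups[val] = groups[val] || []).push(key)
  }
  return groups
}

function trieInsert(root, word) {
  let node = root
  for (const ch of word) {
    node = node[ch] = node[ch] || {}
  }
  node.$ = true // end-of-word marker
}

const groups = groupByValue({
  bedfordshire: 'England',
  buckinghamshire: 'England',
  aberdeenshire: 'Scotland'
})
// groups → { England: ['bedfordshire', 'buckinghamshire'], Scotland: ['aberdeenshire'] }

const root = {}
groups.England.forEach(w => trieInsert(root, w))
// both England keys hang off one shared 'b' node in the trie
```

The real packed string is much denser than this nested-object trie, but the redundancy-sharing principle is the same.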

By doing this clever-stuff ahead-of-time, efrt lets you ship much more data to the client-side, without hassle or overhead.

The whole library is 8kb, the unpack half is barely 2kb.

it is based on a compressed prefix-trie (see the credits below). the goals:
  • get a js object into very compact form
  • reduce filesize/bandwidth a bunch
  • ensure the unpacking time is negligible
  • keep word-lookups on critical-path
var efrt = require('efrt')

var foods = {
  strawberry: 'fruit',
  blueberry: 'fruit',
  blackberry: 'fruit',
  tomato: ['fruit', 'vegetable'],
  cucumber: 'vegetable',
  pepper: 'vegetable'
}

var str = efrt.pack(foods);

var obj = efrt.unpack(str)
obj.tomato
//['fruit', 'vegetable']

or, an Array:

if you pass it an array of strings, it just creates an object with true values:

const data = ['january', 'february', 'march', 'april', 'may', 'june', 'july', 'august', 'september', 'october', 'november', 'december']
const packd = efrt.pack(data)
// true¦a6dec4febr3j1ma0nov4octo5sept4;rch,y;an1u0;ly,ne;uary;em0;ber;pril,ugust
const sameArray = Object.keys(efrt.unpack(packd))
// same thing !
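Conceptually, the array behaviour amounts to this (a sketch of the described behaviour, not efrt's internals):

```javascript
// sketch of the array form described above:
// each string becomes a key whose value is `true`
const toObject = arr => Object.fromEntries(arr.map(k => [k, true]))

const obj = toObject(['january', 'february'])
// { january: true, february: true }
```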

Reserved characters

the keys of the object are normalized. Spaces/unicode are good, but numbers, case-sensitivity, and some punctuation (semicolon, comma, exclamation-mark) are not (yet) supported.

specialChars = new RegExp('[0-9A-Z,;!:|¦]')
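Given that pattern, one way to screen keys before packing (a suggestion using the regex above, not an efrt API):

```javascript
// use the reserved-character pattern above to check keys before packing
const specialChars = new RegExp('[0-9A-Z,;!:|¦]')
const allKeysSafe = obj => Object.keys(obj).every(k => !specialChars.test(k))

const ok = allKeysSafe({ 'new york': 'city' })  // spaces are fine
const bad = allKeysSafe({ 'Route 66': 'road' }) // uppercase + digits are not
```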

efrt is built for, and used heavily in, compromise, to expand the amount of data it can ship onto the client-side. If you find another use for efrt, please drop us a line 🎈


efrt is tuned to be very quick to unpack. It is O(1) to lookup. Packing-up the data is the slowest part, which is usually fine:

var compressed = efrt.pack(skateboarders); // 1k words (on a macbook)
var trie = efrt.unpack(compressed)
// unpacking-step: 5.1ms

trie.hasOwnProperty('tony hawk')
// cached-lookup: 0.02ms


efrt will pack filesize down as much as possible, depending upon the redundancy of the prefixes/suffixes in the words, and the size of the list.

  • list of countries - 1.5k -> 0.8k (46% compressed)
  • all adverbs in wordnet - 58k -> 24k (58% compressed)
  • all adjectives in wordnet - 265k -> 99k (62% compressed)
  • all nouns in wordnet - 1,775k -> 692k (61% compressed)

but there are some things to consider:

  • bigger files compress further (see 🎈 birthday problem)
  • using efrt will reduce gains from gzip compression, which most webservers quietly use
  • english is more suffix-redundant than prefix-redundant, so non-english words may benefit from other styles

Assuming your data has a low _category-to-data ratio_, you will hit break-even at about 250 keys. If your data is in the thousands of keys, you can be very confident about saving your users some considerable bandwidth.



<script src="https://unpkg.com/efrt@latest/builds/efrt.min.js"></script>
<script>
  var smaller = efrt.pack(['larry', 'curly', 'moe'])
  var trie = efrt.unpack(smaller)
</script>

if you're doing the second step in the client, you can load just the unpack-half of the library (~3kb):

npm install efrt-unpack
<script src="https://unpkg.com/efrt@latest/builds/efrt-unpack.min.js"></script>
<script>
  var trie = unpack(compressedStuff);
  trie.hasOwnProperty('miles davis');
</script>

Thanks to John Resig for his fun trie-compression post on his blog, and to Wiktor Jakubczyc for his performance-analysis work.
