🚪 revdoor

revdoor is a single-file C++ library for visiting revolving door combinations.

The combinations without replacement generator implements Algorithm R from TAOCP 7.2.1.3 [1]. The combinations with replacement generator implements the same algorithm, modified to support replacement.

The algorithms visit combinations by indicating at most two pairs of items to swap in and out on each iteration.
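
As a rough illustration of the ordering (a sketch of the revolving-door recursion itself, not revdoor's C++ interface), the following Python snippet lists the k-combinations of {1, ..., n} so that consecutive combinations differ by swapping one item out and one item in:

    # Sketch of the revolving-door (Gray code) listing of k-combinations.
    # Illustrative only; revdoor's actual generators are C++ and report the
    # swapped items incrementally rather than materializing the full list.
    def revolving_door(n, k):
        if k == 0:
            return [[]]
        if k == n:
            return [list(range(1, n + 1))]
        # G(n, k) = G(n-1, k), then reverse(G(n-1, k-1)) with n appended.
        head = revolving_door(n - 1, k)
        tail = [c + [n] for c in reversed(revolving_door(n - 1, k - 1))]
        return head + tail

    if __name__ == '__main__':
        # Each successive combination differs from the previous by one swap.
        for combo in revolving_door(5, 3):
            print(combo)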

The source code is available on GitHub:
https://github.com/dstein64/revdoor

⏲️ vim-startuptime and 🖼️ vim-win

I recently implemented two Vim plugins (they also work on Neovim).

vim-startuptime

vim-startuptime is a plugin for viewing Vim startup event timing—reported in milliseconds. This can be helpful when trying to modify your configuration to improve Vim’s startup time.

  • Launch vim-startuptime with :StartupTime.
  • Press <space> on events to get additional information.
  • Press <enter> on sourcing events to load the corresponding file in a new split.
  • Access documentation with :help vim-startuptime.

The source code—along with installation instructions—is available on GitHub:
https://github.com/dstein64/vim-startuptime

vim-win

vim-win is a plugin for managing windows, including 1) selecting windows, 2) swapping window buffers, and 3) resizing windows. Full functionality requires vim>=8.2 or nvim>=0.4.0.

  • Enter vim-win with <leader>w or :Win.
  • Use the arrow keys or hjkl keys for movement.
  • Change windows with movement keys or numbers.
  • Hold <shift> and use movement keys to resize the active window.
  • Press s or S, followed by a movement key or window number, to swap buffers.
  • Press ? to show a help message.
  • Press <esc> to leave vim-win.
  • Access documentation with :help vim-win.

The source code—along with installation instructions—is available on GitHub:
https://github.com/dstein64/vim-win

🎨 gifcast Color Profiles

gifcast—discussed in a prior post—now supports color profile selection.

The following animated GIFs—generated with gifcast—show a sample of available profiles.

Here is the asciinema cast file used to generate the animated GIFs: profile_demo.cast

▶️ gifcast

I implemented gifcast, a web page for converting asciinema casts to animated GIFs. Here’s the link:
https://dstein64.github.io/gifcast/

The JavaScript source code is available on GitHub:
https://github.com/dstein64/gifcast

The example below was generated with gifcast.

Here is the asciinema cast file used to generate the animated GIF: gifcast.cast

Compressing VGG for Style Transfer

[Image grid: style transfer results with VGG weights stored as 32-bit float (no quantization), 8-bit, 7-bit, 6-bit, 5-bit, 4-bit, 3-bit, 2-bit, and 1-bit]

I recently implemented pastiche—discussed in a prior post—for applying neural style transfer. I encountered a size limit when uploading the library to PyPI, as a package cannot exceed 60MB. The 32-bit floating point weights for the underlying VGG model [1] were contained in an 80MB file. My package was subsequently approved for a size limit increase that could accommodate the VGG weights as-is, but I was still interested in compressing the model.

Various techniques have been proposed for compressing neural networks—including distillation [2] and quantization [3,4]—which have been shown to work well in the context of classification. My problem was in the context of style transfer, so I was not sure how model compression would impact the results.

Experiments

I decided to experiment with weight quantization, using a scheme where I could store the quantized weights on disk, and then uncompress the weights to full 32-bit floats at runtime. This quantization scheme would allow me to continue using my existing code after the model is loaded. I am not targeting environments where memory is a constraint, so I was not particularly interested in approaches that would also reduce the model footprint at runtime. I used kmeans1d—discussed in a prior post—for quantizing each layer’s weights.
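
Here is a minimal sketch of that scheme (the function names are illustrative, not pastiche's actual code, and it assumes kmeans1d's cluster(x, k) interface): each layer's weights are clustered into 2^num_bits values, the integer cluster assignments are what get stored on disk, and the float32 centroid table is used to rebuild full-precision weights at load time.

    # Minimal sketch of per-layer weight quantization via 1D k-means.
    # Function names are illustrative, not pastiche's actual code.
    import numpy as np
    import kmeans1d

    def quantize_layer(weights, num_bits):
        """Cluster a layer's float32 weights into 2**num_bits values.

        Returns integer cluster assignments (stored on disk) and the float32
        centroid table used to reconstruct the weights.
        """
        k = 2 ** num_bits
        assignments, centroids = kmeans1d.cluster(weights.ravel().tolist(), k)
        assignments = np.array(assignments, dtype=np.uint8)  # fits for num_bits <= 8
        return assignments.reshape(weights.shape), np.array(centroids, dtype=np.float32)

    def dequantize_layer(assignments, centroids):
        """Reconstruct full 32-bit float weights at runtime."""
        return centroids[assignments]

    rng = np.random.default_rng(0)
    w = rng.standard_normal((64, 64)).astype(np.float32)
    assignments, centroids = quantize_layer(w, num_bits=8)
    w_hat = dequantize_layer(assignments, centroids)
    print('max reconstruction error:', float(np.abs(w - w_hat).max()))

For num_bits <= 8, each weight's assignment fits in a single byte, which is where most of the on-disk savings relative to 32-bit floats come from.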