Few words about me:

From early childhood I have loved computers, electronics, lasers, sulfuric acid and liquid nitrogen. I always wanted to make microchips, build UAVs and see a nuclear explosion.

Now I do software engineering, and in my spare time - some microelectronics and physics/chemistry experiments.

I live and work in Moscow, Russia.

Weekend laser galvoscanner fun

Finally found some time to play with a galvo scanner and a 1W RGB laser... and caused some laser damage

May 7, 2018

PaperBack - a proper way of storing information on paper

Some 20 years ago I had an idea of storing data on paper - for backups and such. I never got around to implementing it. But luckily someone did - since 2007 we have had PaperBack, made by Oleh Yuschuk.

While playing with it I was able to store ~500 KiB of data on a single side of an A4 sheet, which could already have some practical use. This density is achieved at 300 dpi data density, 80% dot scale (the recommended value of 70% gave a higher error rate) and 20% ECC redundancy. For reliable recovery the scanned image had to be slightly sharpened with GIMP's unsharp mask, but it feels like this is the limit (ECC had to recover ~10% of errors). At 200/240 dpi data density everything is much more reliable.
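The ~500 KiB figure is consistent with a quick back-of-envelope estimate. This is my own rough model - the margin and framing overheads below are assumptions, not PaperBack's actual internals:

```python
# Back-of-envelope capacity of an A4 side at 300 dpi data density,
# 1 bit per data dot, with 20% of the payload reserved for ECC.
# The 25% framing/addressing overhead and full-page printable area
# are assumptions (real pages have margins and block headers).
A4_W_IN, A4_H_IN = 8.27, 11.69   # A4 sheet size in inches
DATA_DPI = 300                   # data dots per inch

raw_bits = (A4_W_IN * DATA_DPI) * (A4_H_IN * DATA_DPI)
raw_kib = raw_bits / 8 / 1024
after_ecc = raw_kib * (1 - 0.20)       # 20% ECC redundancy
usable = after_ecc * (1 - 0.25)        # assumed framing overhead
print(f"raw: {raw_kib:.0f} KiB, usable estimate: {usable:.0f} KiB")
# → raw: 1062 KiB, usable estimate: 637 KiB
```

With real page margins the usable figure drops further, landing in the same ballpark as the ~500 KiB measured.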

One can, for example, take a photo of the sheet with a film camera and get data microfilms at home ))) Also, this data is easy to read even in the distant future and does not depend on specific reading hardware, so even aliens or humans 1000 years from now who find a time capsule with it would be able to read it...

Here is how the data looks at 80 dpi:


Now the data at 300 dpi, the maximum for a 600 dpi printer:

Even closer (the square's side is 2.97 mm). One can see that using fewer than 2x2 pixels per bit of data would require a different recovery approach, due to a very high and pattern-dependent error rate. Paper fibers would also cause some issues at higher data densities.

December 18, 2017

Olympus UPlanApo 10x0.4 WI - the ultimate fun microscope lens

Apparently I am a fan of water immersion. Last time, when I got the 10x0.3 WI lens, I thought I was set for life. But recently I spotted an even more interesting lens on eBay: the Olympus UPlanApo 10x0.4 WI. It is my first apochromatic microscope lens. Its very large field of view and 0.4 aperture make it ~1.8x harder to achieve uniform focus over the field compared to the 10x0.3 WI. But the information density is, in turn, ~1.8 times higher.
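The ~1.8x figure on both sides follows from textbook numerical-aperture scaling (generic optics, not Olympus data):

```python
# Diffraction-limited lateral resolution scales as 1/NA, so the number
# of resolvable points over the same field of view scales as NA^2;
# the diffraction-limited depth of field shrinks roughly as 1/NA^2.
# The same factor therefore shows up as both the gain and the pain.
na_old, na_new = 0.3, 0.4
factor = (na_new / na_old) ** 2
print(f"NA^2 ratio: {factor:.2f}")   # ~1.78, i.e. "~1.8x"
```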

In the center of the frame the new lens (on the left) is sharper due to its higher aperture (0.4 vs 0.3):

At the edge we can see that lateral chromatic aberrations are lower for the APO lens (who knew?). The remaining lateral CA is probably caused by the microscope's tube lens. I correct lateral chromatic aberration in software anyway, but this lens makes it easier and cleaner.
December 11, 2017

Sony a7II: Time for Full Frame (and some fake RAW controversy)

Finally I've bought my first full-frame digital camera - a Sony A7II, on eBay as usual. While everybody was excited about the A7RIII, I was quite disappointed: the same sensor as in the A7RII, and the fake RAW issue still not solved (it affects all Sony cameras, though). Why call it fake RAW? Sony cameras have a noise reduction algorithm which you cannot disable. It kicks in at long exposures and is applied even to RAW(!!!!), which completely ruins the whole idea of RAW files (= a direct stream of data from the imaging sensor for further advanced processing). The most obvious effect of this forced "noise reduction" is erasing 2/3 of the stars in photos of the night sky. You can read a very detailed investigation of the problem here, and you can also sign a petition asking Sony to finally fix it.

As my personal punishment to Sony, I've decided to skip a few camera generations: I will have fun in full frame with the A7II for a while, just like 2 years ago, when I started getting experience with E-mount on a Sony NEX-5 ebayed for $110. Hopefully some day they will release something like an A9RII with a 60-megapixel sensor and the fake RAW issue fixed - that camera will be worth buying new :-)

I am glad I never bought a full-frame A-mount camera (A900/A99 - that would have lost much more money on resale), and a little sad that 3 years ago I blindly bought a Sony A77II instead of one of the E-mount cameras (like the a6000) already available at the time...

December 10, 2017

Slicing ruby like butter

Had some success cutting synthetic ruby rods with a very thin (0.15 mm) diamond cutting disk. These rods were once made for high-power pulsed lasers (they were eventually made completely obsolete by much more efficient neodymium lasers). Chipping at the very last moment of the cut is still not completely solved.

A photo cannot convey the incredible fluorescence of ruby at 692 nm - near the long-wavelength edge of red, almost infrared... The deepest and most intense red the human eye can see...
November 6, 2017

Blade Runner 2049

An impeccable sci-fi setting and story earn it a solid 10/10 in my view. It hits the perfect spot of what I would consider an ideal sci-fi movie. I would recommend watching it in the morning, when theater occupancy is lowest.
October 6, 2017

Is 16 Terabytes Enough for Anyone?

At some point in life everyone starts to feel that 16 TB of storage is no longer enough and it is time to expand. I am a little scared to think how I have managed to fill 12.68 TB already. But there is no time to investigate - let's throw some hardware at the problem!

After routinely adding yet another 4 TB disk to the RAID6 array and trying to resize the ext4 partition, I was puzzled by this message:
root@lbox2:/var$ resize2fs /dev/md1
resize2fs 1.43.4 (31-Jan-2017)
resize2fs: New size too large to be expressed in 32 bits
That was totally unexpected. ext4 does not support partitions larger than 16 TiB? It turns out it did not, until fairly recently. 5 years ago that would have been a brick wall; 2 years ago I would have had to wrestle a little with a bleeding-edge resize2fs/kernel; and now it all works on-the-fly, out of the box. One just needs to convert the ext4 partition to 64-bit format:
root@lbox2:/var$ resize2fs -b /dev/md1
resize2fs 1.43.4 (31-Jan-2017)
Converting the filesystem to 64-bit.
The filesystem on /dev/md1 is now 3907015424 (4k) blocks long.

root@lbox2:/var$ resize2fs /dev/md1
resize2fs 1.43.4 (31-Jan-2017)
Resizing the filesystem on /dev/md1 to 4883769280 (4k) blocks.
The filesystem on /dev/md1 is now 4883769280 (4k) blocks long.

root@lbox2:/var$ mount /var/bigfatdisk

root@lbox2:/var/bigfatdisk$ df . -H
Filesystem      Size  Used Avail Use% Mounted on
/dev/md1         20T   14T  5.9T  71% /var/bigfatdisk

xy@lbox2:~$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md1 : active raid6 sdf1[0] sdg1[7] sdh1[6] sdc1[5] sdb1[3] sde1[2] sda1[1]
      19535077120 blocks super 1.2 level 6, 128k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 1/466 pages [4KB], 4096KB chunk, file: /var/md1_intent.bin

unused devices: <none>
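
The 16 TiB ceiling in the first error message is simple arithmetic - the classic ext4 format addresses blocks with 32-bit numbers. A quick sanity check of the figures from the output above:

```python
# ext4 with 32-bit block numbers and the default 4 KiB block size
# (the "(4k)" in the resize2fs output) tops out at 2^32 * 4 KiB = 16 TiB.
# The -b flag switches the filesystem to 64-bit block numbers.
block_size = 4096                      # bytes
limit_tib = 2**32 * block_size / 2**40
print(f"32-bit ext4 ceiling: {limit_tib:.0f} TiB")

# Cross-check the resized filesystem against df -H (decimal units):
blocks_after = 4883769280              # from the resize2fs output above
print(f"new size: {blocks_after * block_size / 1e12:.1f} TB")  # df -H shows 20T
```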

I remember visiting ftp.cdrom.com in the late 90s - it had an enormous 0.5 TB array: it felt like an absolutely insane volume of data (I had an 850 MB HDD at the time). Readers of this article in 2037 will probably have 1024-layer 3D phase-change memory holding 64 TB in a 2.5" drive. Good for you, readers from the future... Although it is also possible that the popularization of online content, streaming and cloud apps will make large local storage almost obsolete (with a few exceptions).

BTW, since I switched to software RAID at home about 8 years ago, I have not had a single HDD failure (about 50 HDD-years). It still pays off - one less thing to worry about...
September 26, 2017

Golden autumn

September 14, 2017

Aftermath of X9.3 solar flare

Solar minimum, they said... It will be spotless, they said...
September 10, 2017

More Deep Space

Finished processing the photos from my last trip, so now we can all see the results.

Let's start with the Milky Way near M6 and M7 (exposure: 85x3.2s):
September 6, 2017