Conceptually, I get the idea of how GeoJS goes about loading the tiles.
Is there any way to understand the math behind it? (Is there a file I could look into?)
Why do I need this?
My requirement was to generate a pyramid structure for an uploaded WSI (whole-slide image).
I was able to do this based on the ImageHeight, ImageWidth and TileSize.
Reference: https://www.gasi.ch/blog/inside-deep-zoom-2
I need to generate this pyramid structure so that it matches the structure generated by my scanner.
However, there were some differences in the generated tiles. To be sure there is no data loss, I first have to understand how the image is loaded in the viewer.
Hence, if I understand the math behind it, I can be sure there is no data loss.
Any help is greatly appreciated!
And, just as a general note, we’ve been shifting from Gitter to Discourse (https://discourse.girder.org/).
Noted
The code is here: geojs/tileLayer.js at master · OpenGeoscience/geojs · GitHub. The functions tileAtPoint, gcsTileBounds, and _getTileRange are probably the most important.
All of this is probably more complex than what you seem to need, as a lot of it handles arbitrary coordinate spaces. If you need to generate tiles from an original image, I’d recommend using libvips rather than doing it directly.
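For example, the pyvips binding can write a Deep Zoom style pyramid in a couple of lines. This is just a minimal sketch: the input file name, output base name, and tile parameters below are placeholders, and reading formats like .svs requires libvips to be built with OpenSlide support.

```python
import pyvips

# Load the slide; access="sequential" lets libvips stream a large image
# instead of decoding it all at once.  "slide.svs" is a placeholder path.
image = pyvips.Image.new_from_file("slide.svs", access="sequential")

# Write a Deep Zoom style pyramid: a .dzi descriptor plus one folder of
# tiles per level.  tile_size and overlap follow the Deep Zoom conventions.
image.dzsave("slide_pyramid", tile_size=256, overlap=0, suffix=".jpg")
```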
Doing it directly is straightforward: the highest-resolution image is divided into tiles of whatever tile size you want, and each tile at the next lower level averages 2 x 2 tiles together. Depending on your output format, at the right and bottom edges you’ll either have partial tiles or pad the tiles with arbitrary extra data (often black). If your image size is iw x ih, your tile size is tw x th, the image coordinates are ix, iy with 0, 0 at the top left, and the tile number at any given level is tx, ty, then you’ll have ceil(iw / tw) x ceil(ih / th) tiles at the highest-resolution level. Each tile at that level covers the image region from (tx * tw, ty * th) to ((tx + 1) * tw, (ty + 1) * th). At the next lower level you’ll have ceil(iw / tw / 2) x ceil(ih / th / 2) tiles, and each tile covers the original-image region from (tx * tw * 2, ty * th * 2) to ((tx + 1) * tw * 2, (ty + 1) * th * 2), but of course you’ll need to downsample that region by a factor of 2 to make it the correct size. Each subsequent level just uses the next power of two as the scale factor.
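To make that arithmetic concrete, here is a small sketch of the same bookkeeping (the variable names iw, ih, tw, th match the description above; it only computes tile counts and source regions, it never touches pixels):

```python
import math

def tile_counts(iw, ih, tw, th, level_down):
    """Number of tile columns/rows at `level_down` steps below full resolution."""
    scale = 2 ** level_down
    return math.ceil(iw / (tw * scale)), math.ceil(ih / (th * scale))

def tile_source_region(iw, ih, tw, th, level_down, tx, ty):
    """Full-resolution image region that tile (tx, ty) is downsampled from.

    Returns (x0, y0, x1, y1); the right/bottom edges are clamped, so edge
    tiles may be partial (or get padded, depending on the output format).
    """
    scale = 2 ** level_down
    x0, y0 = tx * tw * scale, ty * th * scale
    x1, y1 = min((tx + 1) * tw * scale, iw), min((ty + 1) * th * scale, ih)
    return x0, y0, x1, y1

# Example: a 100000 x 80000 image with 256 x 256 tiles.
iw, ih, tw, th = 100000, 80000, 256, 256
print(tile_counts(iw, ih, tw, th, 0))   # highest-resolution level -> (391, 313)
print(tile_counts(iw, ih, tw, th, 1))   # next level down -> (196, 157)
print(tile_source_region(iw, ih, tw, th, 1, 0, 0))  # (0, 0, 512, 512), downsampled by 2
```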
Thank you for your quick reply.
The straightforward way is what I’ve been doing to generate the pyramid structure for a WSI.
Say the tile size is 496 px: imgHeight / tileSize and imgWidth / tileSize give the number of tile rows and columns.
Since Girder provides an API for getting an individual tile,
GET /item/{itemId}/tiles/zxy/{z}/{x}/{y}
I iterate through imgHeight / tileSize and imgWidth / tileSize for all possible tile combinations at that level.
I start from the max level and go down to the min level, and after each level I halve the counts with math.ceil((imgHeight / tileSize) / 2) and math.ceil((imgWidth / tileSize) / 2).
I have to admit this is really a bad way of going about it, but it works!
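For what it’s worth, here is a rough sketch of that kind of loop using the requests library. The server URL, item ID, and token are placeholders, and it assumes the usual large_image tile metadata fields (sizeX, sizeY, tileWidth, tileHeight, levels); it computes each level’s tile counts from the image dimensions rather than halving the previous counts, which avoids accumulating rounding error.

```python
import math
import os
import requests

API = "https://girder.example.com/api/v1"      # placeholder server URL
ITEM_ID = "your-item-id-here"                  # placeholder item id
HEADERS = {"Girder-Token": "your-token-here"}  # placeholder auth token

# Tile metadata: sizeX, sizeY, tileWidth, tileHeight, levels.
meta = requests.get(f"{API}/item/{ITEM_ID}/tiles", headers=HEADERS).json()
max_level = meta["levels"] - 1                 # highest-resolution level

for z in range(max_level, -1, -1):             # max level down to level 0
    scale = 2 ** (max_level - z)
    cols = math.ceil(meta["sizeX"] / (meta["tileWidth"] * scale))
    rows = math.ceil(meta["sizeY"] / (meta["tileHeight"] * scale))
    for y in range(rows):
        for x in range(cols):
            r = requests.get(f"{API}/item/{ITEM_ID}/tiles/zxy/{z}/{x}/{y}",
                             headers=HEADERS)
            r.raise_for_status()
            # Extension is assumed; check the Content-Type if unsure.
            out = f"level_{z}/{x}_{y}.jpg"
            os.makedirs(os.path.dirname(out), exist_ok=True)
            with open(out, "wb") as f:
                f.write(r.content)
```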
I will look into libvips!
Thanks again