#### How to measure distance on linear polar transformed data using OpenCV

I am working with image data to which I am applying a polar transform, in order to measure the width of bright rings in a roughly circular object.

So far I have something like this using faux data:

```python
import cv2
import numpy as np

img = cv2.imread('testimg.tif')
img_gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

#threshold image to calculate center of object
ret,thresh = cv2.threshold(img_gry,254,255,cv2.THRESH_BINARY_INV)
M = cv2.moments(thresh)
cX = int(M["m10"] / M["m00"])
cY = int(M["m01"] / M["m00"])

#convert white space around object to 0 intensity
img_gry[img_gry == 255] = 0

#calculate radius of image to be used for polar transform
radius = np.sqrt((img_gry.shape[0]/2.0)**2.0 + (img_gry.shape[1]/2.0)**2.0)

#transform using center coordinates and radius
polar_image = cv2.linearPolar(img_gry,(cX, cY), radius, cv2.WARP_FILL_OUTLIERS)
polar_image = polar_image.astype(np.uint8)

#add gaussian smoothing
polar_blurred = cv2.GaussianBlur(polar_image,(3,3),0)
```
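To measure the peak widths along a slice, my current approach (a sketch, not part of the code above) is to threshold a 1-D intensity profile and count the contiguous samples above the threshold. Here a synthetic profile stands in for one row of `polar_blurred`, and `peak_widths_px` is a helper name of my own:

```python
import numpy as np

def peak_widths_px(profile, thresh):
    """Return lengths (in samples) of contiguous runs above thresh."""
    above = profile > thresh
    # locate rising (0 -> 1) and falling (1 -> 0) edges of each run
    edges = np.diff(above.astype(np.int8))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    # handle runs that touch the ends of the profile
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    return ends - starts

# synthetic slice: two bright "rings", 3 and 5 samples wide
profile = np.zeros(50)
profile[10:13] = 200
profile[30:35] = 220
print(peak_widths_px(profile, 128))  # → [3 5]
```

This gives widths in *polar-image columns*, which is exactly the unit I am unsure how to interpret.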

The polar image looks something like this:

[polar-transformed image]

and I will be looking at slices of the data that show intensity along one row, like this:

[intensity profile along one slice]

My question from here is what formula to use to calculate the width of the bright peaks in the polar image. I don't really know what axes are used for displaying this transformation, which is the root of my problem. For example, my non-transformed peaks have a width of ~3 px, but the transformed data has a peak width of 8 units (radians? pixels? I have no idea). How can I estimate the actual width in my non-transformed data from a "distance" measured in this polar-transformed data?
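My current guess at the conversion, which I am not certain about: assuming `cv2.linearPolar`'s default behavior of producing an output the same size as the input, the x-axis of the polar image is radius, scaled linearly so that `maxRadius` lands at the right edge. If that is right, a horizontal distance converts back to source pixels like this (the helper name is mine):

```python
def polar_dx_to_pixels(dx_columns, max_radius, polar_width):
    """Convert a horizontal distance in the polar image back to
    source-image pixels, assuming columns [0, polar_width) map
    linearly onto radii [0, max_radius)."""
    return dx_columns * max_radius / polar_width

# e.g. a 512-px-wide image, max_radius ~ sqrt(256**2 + 256**2) ~ 362:
# 8 polar columns would correspond to about 5.7 source pixels
print(round(polar_dx_to_pixels(8, 362.0, 512), 1))  # → 5.7
```

That does not quite reconcile my ~3 px peaks with the 8-unit peaks I measure, so either my assumption about the axis scaling or my measurement is off, and I'd appreciate a correction.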

Source: Python Questions