Initial commit - BraceIQMed platform with frontend, API, and brace generator

commit 745f9f827f
2026-01-29 14:34:05 -08:00
187 changed files with 534688 additions and 0 deletions

View File

@@ -0,0 +1,3 @@
# You get this from deta.sh (the file server we use to store the model)
# Alternatively, you can download our model on: https://github.com/Blankeos/scoliovis-training/releases/download/latest/keypointsrcnn_weights.pt
DETA_ID=

10
scoliovis-api/.gitignore vendored Normal file
View File

@@ -0,0 +1,10 @@
venv
.deta
__pycache__
.detaignore
scoliovis_segmentation_model.h5
C AP test landmarks.csv
C AP test boundingboxes.csv
/data
.env
/models

Binary file not shown. (new image, 393 KiB)

Binary file not shown. (new image, 439 KiB)

Binary file not shown. (new image, 366 KiB)

Binary file not shown. (new image, 310 KiB)

Binary file not shown. (new image, 360 KiB)

36
scoliovis-api/OUTPUT_TEST_1/results.json Normal file
View File

@@ -0,0 +1,36 @@
[
{
"image": "016001.jpg",
"vertebrae_detected": 6,
"error": "Could not calculate angles"
},
{
"image": "016002.jpg",
"vertebrae_detected": 9,
"curve_type": "S",
"pt": 0.0,
"mt": 17.16,
"tl": 24.13
},
{
"image": "016003.jpg",
"vertebrae_detected": 8,
"curve_type": "S",
"pt": 0.0,
"mt": 11.93,
"tl": 15.8
},
{
"image": "016004.jpg",
"vertebrae_detected": 2,
"error": "Could not calculate angles"
},
{
"image": "016005.jpg",
"vertebrae_detected": 11,
"curve_type": "S",
"pt": 0.0,
"mt": 10.63,
"tl": 16.18
}
]

1
scoliovis-api/Procfile Normal file
View File

@@ -0,0 +1 @@
web: uvicorn main:app --host=0.0.0.0 --port=${PORT:-5000}

51
scoliovis-api/README.md Normal file
View File

@@ -0,0 +1,51 @@
# 🦴⚡ scoliovis-api
![demo](https://github.com/seajayrubynose/cafely-pictures/blob/master/_scoliovis/demo.gif?raw=true)
This repository contains the backend API for our undergraduate thesis project, entitled **_"ScolioVis: Automated Cobb Angle Measurement on Anterior-Posterior Spine X-Rays using Multi-Instance Keypoint Detection with Keypoint RCNN"_**.
A live demo is available at [https://scoliovis.app](https://scoliovis.app).
For more information on the whole project, see [blankeos/scoliovis](https://github.com/Blankeos/scoliovis).
### Built with
- Python
- FastAPI
- OpenCV
- PyTorch
### Installation
1. Clone repo
```sh
> git clone https://github.com/blankeos/scoliovis-api.git
> cd scoliovis-api
```
2. Create a virtual environment
```sh
> python -m venv venv
```
3. Activate virtual environment
```sh
> venv\Scripts\activate # windows
> source venv/bin/activate # mac/linux (use venv/Scripts/activate in Git Bash on Windows)
```
4. Install dependencies
```sh
> pip install -r requirements.txt
```
5. Download the model `keypointsrcnn_weights.pt` and place it inside `/models`
- Download here: [scoliovis-training/releases/keypointsrcnn_weights.pt](https://github.com/Blankeos/scoliovis-training/releases/download/latest/keypointsrcnn_weights.pt)
6. Run the server
```sh
> uvicorn main:app
```
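7. (Optional) Smoke-test the endpoint. A minimal sketch using `requests`; the port is uvicorn's default and the image filename is only an example:
```python
# pip install requests
import requests

URL = "http://127.0.0.1:8000/v2/getprediction"  # default `uvicorn main:app` address

with open("spine_xray.jpg", "rb") as f:  # any AP spine X-ray
    res = requests.post(URL, files={"image": f})  # the field name must be "image"

data = res.json()
print(data["curve_type"])  # "S" | "C" (None if angle calculation failed)
print(data["angles"])      # {"pt": ..., "mt": ..., "tl": ...} or None
```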

172
scoliovis-api/REPORT.md Normal file
View File

@@ -0,0 +1,172 @@
# ScolioVis API - Test Report
## Overview
| Property | Value |
|----------|-------|
| **Repository** | scoliovis-api |
| **Source** | https://github.com/blankeos/scoliovis-api |
| **Paper** | "ScolioVis: Automated Cobb Angle Measurement using Keypoint RCNN" |
| **Model** | Keypoint R-CNN (ResNet50-FPN backbone) |
| **Output** | Vertebra landmarks (4 corners each) + 3 Cobb angles |
| **Pretrained Weights** | Yes (227 MB) |
---
## Purpose
ScolioVis detects **vertebra corners** and calculates **Cobb angles** from the detected landmarks:
- Outputs 4 keypoints per vertebra (corners)
- Calculates PT, MT, TL angles from vertebra orientations
- Provides interpretable results (can visualize detected vertebrae)
---
## Test Results (OUTPUT_TEST_1)
### Test Configuration
- **Test Dataset**: Spinal-AI2024 subset5 (test set)
- **Images Tested**: 5 (016001.jpg - 016005.jpg)
- **Weights**: Pretrained (keypointsrcnn_weights.pt)
- **Device**: CPU
### Results Comparison
| Image | GT PT | Pred PT | GT MT | Pred MT | GT TL | Pred TL | Verts |
|-------|-------|---------|-------|---------|-------|---------|-------|
| 016001.jpg | 0.0° | - | 4.09° | - | 12.45° | - | 6 (failed) |
| 016002.jpg | 7.77° | 0.0° | 21.09° | 17.2° | 24.34° | 24.1° | 9 |
| 016003.jpg | 5.8° | 0.0° | 11.17° | 11.9° | 15.37° | 15.8° | 8 |
| 016004.jpg | 0.0° | - | 11.94° | - | 20.01° | - | 2 (failed) |
| 016005.jpg | 9.97° | 0.0° | 16.88° | 10.6° | 20.77° | 16.2° | 11 |
**GT = Ground Truth, Pred = Predicted, Verts = Vertebrae Detected**
### Error Analysis (Successful Predictions Only)
| Image | PT Error | MT Error | TL Error | Mean Error |
|-------|----------|----------|----------|------------|
| 016002.jpg | -7.8° | -3.9° | -0.2° | 4.0° |
| 016003.jpg | -5.8° | +0.7° | +0.4° | 2.3° |
| 016005.jpg | -10.0° | -6.3° | -4.6° | 7.0° |
**Average Error: 4.4°** (on successful predictions)
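The averages above are plain means of the absolute per-angle errors; a quick self-contained check (table values hard-coded):
```python
# Signed errors (Pred - GT) from the table above, per image: [PT, MT, TL]
errors = {
    "016002.jpg": [-7.8, -3.9, -0.2],
    "016003.jpg": [-5.8, +0.7, +0.4],
    "016005.jpg": [-10.0, -6.3, -4.6],
}
per_image = {k: sum(abs(e) for e in v) / len(v) for k, v in errors.items()}
print(per_image)                                 # ~4.0, 2.3, 7.0
print(sum(per_image.values()) / len(per_image))  # ~4.4
```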
### Success Rate
- **3/5 images** (60%) successfully calculated angles
- **2/5 images** failed (too few vertebrae detected)
---
## Output Files
```
OUTPUT_TEST_1/
├── 016001_result.png # Visualization (6 verts, failed)
├── 016002_result.png # Visualization (9 verts, success)
├── 016003_result.png # Visualization (8 verts, success)
├── 016004_result.png # Visualization (2 verts, failed)
├── 016005_result.png # Visualization (11 verts, success)
└── results.json # JSON results
```
---
## How It Works
```
Input Image (JPG/PNG)
┌─────────────────────────┐
│ Keypoint R-CNN │
│ (ResNet50-FPN) │
│ - Detect vertebrae │
│ - Predict 4 corners │
└─────────────────────────┘
┌─────────────────────────┐
│ Post-processing │
│ - Filter by score >0.5 │
│ - NMS (IoU 0.3) │
│ - Sort by Y position │
│ - Keep top 17 verts │
└─────────────────────────┘
┌─────────────────────────┐
│ Cobb Angle Calculation │
│ - Compute midpoint │
│ lines per vertebra │
│ - Find max angles │
│ - Classify S vs C │
└─────────────────────────┘
Output: {
landmarks: [...],
angles: {pt, mt, tl},
curve_type: "S" | "C"
}
```
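At its core, the angle step measures the angle between two vertebra midline vectors via a normalized dot product. A minimal numpy sketch with made-up vectors (illustrative only, not the repo's exact code path):
```python
import numpy as np

# Hypothetical midline (endplate) direction vectors (dx, dy) for two vertebrae
v_upper = np.array([60.0, 0.0])   # horizontal endplate
v_lower = np.array([56.4, 20.5])  # tilted roughly 20 degrees

cos_a = v_upper @ v_lower / (np.linalg.norm(v_upper) * np.linalg.norm(v_lower))
angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
print(round(angle, 1))  # ~20.0, this pair's candidate Cobb angle
```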
---
## Strengths
1. **Pretrained weights available** - Ready to use
2. **Interpretable output** - Can visualize detected vertebrae
3. **Good accuracy** - 4.4° average error when detection succeeds
4. **Curve type detection** - Identifies S-curve vs C-curve
## Limitations
1. **Detection failures** - 40% failure rate on this test set
2. **Requires sufficient vertebrae** - Needs ~8+ detected vertebrae for reliable angles
3. **Synthetic image challenges** - The model was trained on real X-rays, so it may underperform on synthetic images like Spinal-AI2024
4. **PT angle often 0** - The model tends to underestimate the proximal thoracic angle
---
## Usage
```bash
# Activate venv
.\venv\Scripts\activate
# Run test script
python test_subset5.py
# Or start FastAPI server
uvicorn main:app --reload
# Then POST image to /v2/getprediction
```
---
## Comparison with Seg4Reg
| Metric | ScolioVis | Seg4Reg (no weights) |
|--------|-----------|---------------------|
| Avg Error | **4.4°** | 35.7° |
| Success Rate | 60% | 100% |
| Interpretable | **Yes** | No |
| Pretrained | **Yes** | No |
**Winner**: ScolioVis (when detection succeeds)
---
## Conclusion
ScolioVis with pretrained weights produces **clinically reasonable results** (4.4° average error) when vertebra detection succeeds. The main limitation is detection reliability on synthetic images: 40% of test images had too few vertebrae detected.
**Recommendation**: Good for real X-rays; may need fine-tuning for synthetic Spinal-AI2024 images.
---
*Report generated: January 2026*
*Test data: Spinal-AI2024 subset5*

67
scoliovis-api/balgrist_results/balgrist_results.json Normal file
View File

@@ -0,0 +1,67 @@
[
{
"patient_id": "10",
"images": [
{
"filename": "patient10_ap.png",
"vertebrae_detected": 11,
"curve_type": "S",
"pt": 14.3,
"mt": 35.11,
"tl": 33.4,
"max_angle": 35.11,
"severity": "Moderate"
},
{
"filename": "patient10_lat.png",
"vertebrae_detected": 10,
"curve_type": "S",
"pt": 0.0,
"mt": 17.9,
"tl": 20.04,
"max_angle": 20.04,
"severity": "Mild"
}
]
},
{
"patient_id": "11",
"images": [
{
"filename": "patient11_ap.png",
"vertebrae_detected": 3,
"error": "Could not calculate angles"
},
{
"filename": "patient11_lat.png",
"vertebrae_detected": 9,
"curve_type": "S",
"pt": 8.04,
"mt": 20.31,
"tl": 0.0,
"max_angle": 20.31,
"severity": "Mild"
}
]
},
{
"patient_id": "12",
"images": [
{
"filename": "patient12_ap.png",
"vertebrae_detected": 7,
"error": "Could not calculate angles"
},
{
"filename": "patient12_lat.png",
"vertebrae_detected": 7,
"curve_type": "S",
"pt": 10.0,
"mt": 13.93,
"tl": 13.97,
"max_angle": 13.97,
"severity": "Mild"
}
]
}
]

Binary file not shown. (new image, 537 KiB)

Binary file not shown. (new image, 400 KiB)

Binary file not shown. (new image, 247 KiB)

Binary file not shown. (new image, 235 KiB)

Binary file not shown. (new image, 342 KiB)

Binary file not shown. (new image, 257 KiB)

104
scoliovis-api/main.py Normal file
View File

@@ -0,0 +1,104 @@
import dotenv
dotenv.load_dotenv()
from fastapi import FastAPI, UploadFile
from fastapi.middleware.cors import CORSMiddleware
# -- 1. Create FastAPI app --
app = FastAPI()
# -- 2. Enable CORS All Origin --
app.add_middleware(
CORSMiddleware,
allow_origins=["*"], # Allows all origins
allow_credentials=True,
allow_methods=["*"], # Allows all methods
allow_headers=["*"], # Allows all headers
)
# -- 3. Load models if not yet loaded --
# from scoliovis.get_model import get_detection_model # YoloV5
# detection_model = get_detection_model() # YoloV5
from scoliovis.kprcnn import predict, kprcnn_to_scoliovis_api_format # KPRCNN
# 4. -- Routes --
@app.get("/")
async def read_root():
print("Read Root started")
return {"Hello": "World", "Message": "Welcome to ScolioVis-API! Send a POST request these APIs to get started!",
"ModelPredict": "/v2/getprediction"
}
# Keypoint RCNN Model
from io import BytesIO
from PIL import Image
import cv2 as cv
import numpy as np
@app.post('/v2/getprediction')
async def get_prediction_v2(image: UploadFile):
# - Preprocess 'Image'
image = Image.open(BytesIO(await image.read())).convert('RGB') # Decode UploadFile -> PIL
image = cv.cvtColor(np.array(image), cv.COLOR_RGB2BGR) # PIL RGB -> Mat(OpenCV) RGB -> BGR
# - Keypoint RCNN Predict
bboxes, keypoints, scores = predict(image)[0]
api_object = kprcnn_to_scoliovis_api_format(bboxes, keypoints, scores, image.shape)
return api_object
# YOLO V5
# import base64
# import pandas as pd
# import json
# def detect(imgs):
# detection_model.conf = 0.50
# results = detection_model(imgs, size=640) # batch of images
# # Sort by confidence all and get top 17
# results_df = [pred_df.sort_values('confidence', ascending = False).head(17) for pred_df in results.pandas().xyxy]
# results_df_n = [pred_df.sort_values('confidence', ascending = False).head(17) for pred_df in results.pandas().xyxyn]
# # Sort by min y so they're ordered from top to bottom
# results_df = [pred_df.sort_values('ymin', ascending = True) for pred_df in results_df]
# results_df_n = [pred_df.sort_values('ymin', ascending = True) for pred_df in results_df_n]
# return results_df, results_df_n
# @app.post("/getprediction")
# async def get_prediction(image: UploadFile):
# # - Preprocess 'image'
# image = Image.open(BytesIO(await image.read())).convert('RGB') # Decode UploadFile -> PIL
# image = cv.cvtColor(np.array(image), cv.COLOR_RGB2BGR) # PIL RGB -> Mat(OpenCV) RGB -> BGR
# # - Object Detection
# results_df, results_df_n = detect(image)
# detections = json.loads(results_df[0].to_json(orient="records"))
# detections_n = json.loads(results_df_n[0].to_json(orient="records"))
# # - Create Base64 from Detection
# jpeg_string = base64.b64encode(cv.imencode('.jpg', image)[1]).decode()
# # - Landmark Detection
# # df = pd.read_csv('data/C AP test landmarks.csv', header=None)
# height, width = image.shape
# landmarks = [0.43909,0.57581,0.44172,0.56968,0.43646,0.56617,0.43032,0.55653,0.42857,0.5539,0.42244,0.53812,0.41718,0.52498,0.40578,0.50482,0.38738,0.48904,0.34882,0.45223,0.33041,0.44172,0.28396,0.3979,0.28396,0.38826,0.25592,0.37511,0.25416,0.37248,0.25855,0.37862,0.26643,0.38563,0.28571,0.41455,0.30237,0.4312,0.34531,0.48203,0.37423,0.50394,0.4128,0.57055,0.43909,0.5837,0.47415,0.62138,0.47765,0.63453,0.52585,0.66959,0.52498,0.68098,0.52147,0.68449,0.52235,0.68536,0.51183,0.66258,0.48203,0.64417,0.42682,0.6021,0.38913,0.58808,0.38124,0.57493,0.071538,0.072303,0.091048,0.096787,0.10214,0.1075,0.12663,0.13007,0.13657,0.14614,0.15647,0.17024,0.17024,0.18592,0.20161,0.215,0.21385,0.22992,0.24981,0.26473,0.26358,0.27621,0.29763,0.31331,0.30987,0.32249,0.35004,0.35501,0.36266,0.36496,0.40742,0.39977,0.42043,0.41086,0.46174,0.4407,0.47666,0.45409,0.51836,0.48852,0.52946,0.5,0.56618,0.54093,0.58569,0.55394,0.61515,0.5899,0.62969,0.61132,0.66641,0.65302,0.68133,0.6733,0.71385,0.72073,0.728,0.73718,0.76243,0.77735,0.785,0.80796,0.83818,0.85845,0.86113,0.87643,0.90207,0.90551]
# return {
# "landmarks": landmarks,
# "detections": detections,
# "normalized_detections": detections_n,
# "base64_image": jpeg_string,
# }
# Useful Stackoverflow/GitHub Solutions:
# - Convert PIL -> Mat(OpenCV) & Vice Versa
# https://stackoverflow.com/questions/14134892/convert-image-from-pil-to-opencv-format
# - How to Receive Image -> Process with Cv2 -> Return Image in FastAPI
# https://stackoverflow.com/questions/61333907/receiving-an-image-with-fast-api-processing-it-with-cv2-then-returning-it
# - Receiving UploadFile -> PIL
# https://github.com/tiangolo/fastapi/discussions/4308
# Run the application with:
# uvicorn main:app --reload

59
scoliovis-api/requirements.txt Normal file
View File

@@ -0,0 +1,59 @@
fastapi
Pillow
python-multipart
opencv-python-headless
mat4py
uvicorn
gunicorn
# tensorflow-cpu
deta
pandas
python-dotenv
# YOLOv5 requirements https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt
# Usage: pip install -r requirements.txt
# Base ----------------------------------------
matplotlib>=3.2.2
numpy>=1.18.5
# opencv-python>=4.1.1
Pillow>=7.1.2
PyYAML>=5.3.1
requests>=2.23.0
scipy>=1.4.1
torch>=1.7.0 # see https://pytorch.org/get-started/locally/ (recommended)
torchvision>=0.12
tqdm>=4.64.0
# protobuf<=3.20.1 # https://github.com/ultralytics/yolov5/issues/8012
# Logging -------------------------------------
tensorboard>=2.4.1
# clearml>=1.2.0
# comet
# Plotting ------------------------------------
pandas>=1.1.4
seaborn>=0.11.0
# Export --------------------------------------
# coremltools>=6.0 # CoreML export
# onnx>=1.9.0 # ONNX export
# onnx-simplifier>=0.4.1 # ONNX simplifier
# nvidia-pyindex # TensorRT export
# nvidia-tensorrt # TensorRT export
# scikit-learn<=1.1.2 # CoreML quantization
# tensorflow>=2.4.1 # TF exports (-cpu, -aarch64, -macos)
# tensorflowjs>=3.9.0 # TF.js export
# openvino-dev # OpenVINO export
# Deploy --------------------------------------
# tritonclient[all]~=2.24.0
# Extras --------------------------------------
ipython # interactive notebook
psutil # system utilization
thop>=0.1.1 # FLOPs computation
# mss # screenshots
# albumentations>=1.0.3
# pycocotools>=2.0 # COCO mAP
# roboflow

View File

220
scoliovis-api/scoliovis/cobb_angle_cal.py Normal file
View File

@@ -0,0 +1,220 @@
import numpy as np
import math
def _create_angles_dict(pt, mt, tl):
"""
pt,mt,tl: tuple(2) that contains: (angle, [idxTop, idxBottom])
"""
return {
"pt": {
"angle": pt[0],
"idxs": [pt[1][0], pt[1][1]],
},
"mt": {
"angle": mt[0],
"idxs": [mt[1][0], mt[1][1]],
},
"tl": {
"angle": tl[0],
"idxs": [tl[1][0], tl[1][1]],
}
}
def _isS(p):
"""
Returns True if the spine is S-shaped: checks whether the vertebra
midpoints in `p` fall on both sides of the chord from the first to the
last midpoint (mixed signs in `ll` make the two sums below differ).
"""
num = len(p)
ll = np.zeros([num-2,1])
for i in range(num-2):
ll[i] = (p[i][1]-p[num-1][1])/(p[0][1]-p[num-1][1]) - (p[i][0]-p[num-1][0])/(p[0][0]-p[num-1][0])
flag = np.sum(np.sum(np.dot(ll,ll.T))) != np.sum(np.sum(abs(np.dot(ll,ll.T))))
return flag
def cobb_angle_cal(landmark_xy, image_shape):
"""
`landmark_xy`: number[n]. [x1,x2,...,xn,y1,y2,...,yn], where
- `n` is even.
- 0 <= x <= W
- 0 <= y <= H
`image_shape`: (HEIGHT, WIDTH, CHANNELS) *only HEIGHT is important
Returns: Tuple(4): cobb_angles_list, angles_with_pos, curve_type, midpoint_lines.
- `cobb_angles_list` - For evaluating with ground-truth: ex. [0.50, 0.11, 0.33].
- `angles_with_pos` - dict of "pt", "mt", "tl", each with values for "angle" and "idxs".
- `curve_type` - "S" or "C".
- `midpoint_lines` - list of midpoint line coordinates: ex. [[[x,y],[x,y]], [[x,y],[x,y]], ...].
"""
landmark_xy = list(landmark_xy) # input is list
ap_num = int(len(landmark_xy)/2) # number of points
vnum = int(ap_num / 4) # number of verts
first_half = landmark_xy[:ap_num]
second_half = landmark_xy[ap_num:]
# Values this function returns
cob_angles = np.zeros(3)
angles_with_pos = {}
curve_type = None
# Midpoints (2 points per vertebra)
mid_p_v = []
for i in range(int(len(landmark_xy)/4)):
x = first_half[2*i: 2*i+2]
y = second_half[2*i: 2*i+2]
row = [(x[0] + x[1]) / 2, (y[0] + y[1]) / 2]
mid_p_v.append(row)
mid_p = []
for i in range(int(vnum)):
x = first_half[4*i: 4*i+4]
y = second_half[4*i: 4*i+4]
point1 = [(x[0] + x[2]) / 2, (y[0] + y[2]) / 2]
point2 = [(x[3] + x[1]) / 2, (y[3] + y[1]) / 2]
mid_p.append(point1)
mid_p.append(point2)
# Line and Slope
vec_m = []
for i in range(int(len(mid_p)/2)):
points = mid_p[2*i: 2*i+2]
row = [points[1][0]-points[0][0], points[1][1]-points[0][1]]
vec_m.append(row)
mod_v = []
for i in vec_m:
row = [i[0]*i[0], i[1]*i[1]]
mod_v.append(row)
mod_v = np.sqrt(np.sum(np.matrix(mod_v), axis=1))
dot_v = np.dot(np.matrix(vec_m), np.matrix(vec_m).T)
slopes = [] # note: computed but not used below
for i in vec_m:
slope = i[1]/i[0]
slopes.append(slope)
angles = np.clip(dot_v/np.dot(mod_v, mod_v.T), -1, 1)
angles = np.arccos(angles)
maxt = np.amax(angles, axis = 0)
pos1 = np.argmax(angles, axis = 0)
pt, pos2 = np.amax(maxt), np.argmax(maxt)
pt = pt*180/math.pi
cob_angles[0] = pt
if not _isS(mid_p_v): # C-shaped spine (single curve)
mod_v1 = np.sqrt(np.sum(np.multiply(np.matrix(vec_m[0]), np.matrix(vec_m[0]))))
mod_vs1 = np.sqrt(np.sum(np.multiply(np.matrix(vec_m[pos2]), np.matrix(vec_m[pos2])), axis=1))
mod_v2 = np.sqrt(np.sum(np.multiply(np.matrix(vec_m[int(vnum-1)]), np.matrix(vec_m[int(vnum-1)])), axis=1))
mod_vs2 = np.sqrt(np.sum(np.multiply(vec_m[pos1.item((0, pos2))], vec_m[pos1.item((0, pos2))])))
dot_v1 = np.dot(np.array(vec_m[0]), np.array(vec_m[pos2]).T)
dot_v2 = np.dot(np.array(vec_m[int(vnum-1)]), np.array(vec_m[pos1.item((0, pos2))]).T)
mt = np.arccos(np.clip(dot_v1/np.dot(mod_v1, mod_vs1.T), -1, 1))
tl = np.arccos(np.clip(dot_v2/np.dot(mod_v2, mod_vs2.T), -1, 1))
mt = mt*180/math.pi
tl = tl*180/math.pi
cob_angles[1] = mt
cob_angles[2] = tl
# DETECTION CASE 1: Spine Type C
angles_with_pos = _create_angles_dict(mt=(float(pt), [pos2, pos1.A1.tolist()[pos2]]), pt=(float(mt), [0, int(pos2)]), tl=(float(tl), [pos1.A1.tolist()[pos2], vnum-1]))
curve_type = "C"
else:
if(((mid_p_v[pos2*2][1])+mid_p_v[pos1.item((0, pos2))*2][1]) < image_shape[0]):
#Calculate Upside Cobb Angle
mod_v_p = np.sqrt(np.sum(np.multiply(vec_m[pos2], vec_m[pos2])))
mod_v1 = np.sqrt(np.sum(np.multiply(vec_m[0:pos2], vec_m[0:pos2]), axis=1))
dot_v1 = np.dot(np.array(vec_m[pos2]), np.array(vec_m[0:pos2]).T)
angles1 = np.arccos(np.clip(dot_v1/np.dot(mod_v_p, mod_v1.T), -1, 1))
CobbAn1, pos1_1 = np.amax(angles1, axis = 0), np.argmax(angles1, axis = 0)
mt = CobbAn1*180/math.pi
cob_angles[1] = mt
#Calculate Downside Cobb Angle
mod_v_p2 = np.sqrt(np.sum(np.multiply(vec_m[pos1.item((0, pos2))], vec_m[pos1.item((0, pos2))])))
mod_v2 = np.sqrt(np.sum(np.multiply(vec_m[pos1.item((0, pos2)):int(vnum)], vec_m[pos1.item((0, pos2)):int(vnum)]), axis=1))
dot_v2 = np.dot(np.array(vec_m[pos1.item((0, pos2))]), np.array(vec_m[pos1.item((0, pos2)):int(vnum)]).T)
angles2 = np.arccos(np.clip(dot_v2/np.dot(mod_v_p2, mod_v2.T), -1, 1))
CobbAn2, pos1_2 = np.amax(angles2, axis = 0), np.argmax(angles2, axis = 0)
tl = CobbAn2*180/math.pi
cob_angles[2] = tl
pos1_2 = pos1_2 + pos1.item((0, pos2)) - 1
# DETECTION CASE 2: Spine Type S, Up and Bottom
# print("case 2")
angles_with_pos = _create_angles_dict(mt=(float(pt), [pos2, pos1.A1.tolist()[pos2]]), pt=(float(mt), [int(pos1_1), int(pos2)]), tl=(float(tl), [pos1.A1.tolist()[pos2], int(pos1_2)]))
curve_type = "S"
else:
#Calculate Upside Cobb Angle
mod_v_p = np.sqrt(np.sum(np.multiply(vec_m[pos2], vec_m[pos2])))
mod_v1 = np.sqrt(np.sum(np.multiply(vec_m[0:pos2], vec_m[0:pos2]), axis=1))
dot_v1 = np.dot(np.array(vec_m[pos2]), np.array(vec_m[0:pos2]).T)
angles1 = np.arccos(np.clip(dot_v1/np.dot(mod_v_p, mod_v1.T), -1, 1))
CobbAn1 = np.amax(angles1, axis = 0)
pos1_1 = np.argmax(angles1, axis = 0)
mt = CobbAn1*180/math.pi
cob_angles[1] = mt
#Calculate Upper Upside Cobb Angle
mod_v_p2 = np.sqrt(np.sum(np.multiply(vec_m[pos1_1], vec_m[pos1_1])))
mod_v2 = np.sqrt(np.sum(np.multiply(vec_m[0:pos1_1+1], vec_m[0:pos1_1+1]), axis=1))
dot_v2 = np.dot(np.array(vec_m[pos1_1]), np.array(vec_m[0:pos1_1+1]).T)
angles2 = np.arccos(np.clip(dot_v2/np.dot(mod_v_p2, mod_v2.T), -1, 1))
CobbAn2, pos1_2 = np.amax(angles2, axis = 0), np.argmax(angles2, axis = 0)
tl = CobbAn2*180/math.pi
cob_angles[2] = tl
# pos1_2 = pos1_2 + pos1.item((0, pos2)) - 1
# DETECTION CASE 3: Spine Type S, Up and Bottom
# print("case 3")
angles_with_pos = _create_angles_dict(tl=(float(pt), [pos2, pos1.A1.tolist()[pos2]]), mt=(float(mt), [pos1_1, pos2]), pt=(float(tl), [int(pos1_2), int(pos1_1)]))
curve_type = "S"
midpoint_lines = []
for i in range(0,int(len(mid_p)/2)):
midpoint_lines.append([list(map(int, mid_p[i*2])), list(map(int, mid_p[i*2+1]))])
# Remove Numpy Values
cobb_angles_list = [float(c) for c in cob_angles]
for key in angles_with_pos.keys():
angles_with_pos[key]['angle'] = float(angles_with_pos[key]['angle'])
for i in range(len(angles_with_pos[key]['idxs'])):
angles_with_pos[key]['idxs'][i] = int(angles_with_pos[key]['idxs'][i])
return cobb_angles_list, angles_with_pos, curve_type, midpoint_lines
def keypoints_to_landmark_xy(keypoints):
"""
converts keypoints (from model)
[
[
[x,y],[x,y],[x,y],[x,y]
]
]
to
[x1,x2,x3,...,xn,y1,y2,y3,...,yn]
"""
x_points = []
for kps in keypoints:
for kp in kps:
x_points.append(kp[0])
y_points = []
for kps in keypoints:
for kp in kps:
y_points.append(kp[1])
landmark_xy = x_points + y_points
return landmark_xy
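# --- Illustrative smoke test (editor's sketch, not part of the shipped API) ---
# Fabricates a 5-vertebra C-shaped spine out of tilted boxes and runs it through
# cobb_angle_cal. Real landmarks come from the Keypoint RCNN via
# keypoints_to_landmark_xy(); all geometry below is made up.
if __name__ == "__main__":
fake_keypoints = []
centers = [(100, 80), (110, 170), (113, 260), (110, 350), (98, 440)]
tilts_deg = [-8, -4, 0, 4, 10] # endplate tilt per vertebra
for (cx, cy), deg in zip(centers, tilts_deg):
t = math.radians(deg)
dx, dy = 30 * math.cos(t), 30 * math.sin(t) # half-width along the endplate
ox, oy = -15 * math.sin(t), 15 * math.cos(t) # half-height across it
# corner order: [top-left, top-right, bottom-left, bottom-right]
fake_keypoints.append([
[cx - dx - ox, cy - dy - oy], [cx + dx - ox, cy + dy - oy],
[cx - dx + ox, cy - dy + oy], [cx + dx + ox, cy + dy + oy],
])
cobb_angles, angle_dict, curve_type, lines = cobb_angle_cal(
keypoints_to_landmark_xy(fake_keypoints), (512, 256, 3))
print(curve_type, [round(a, 1) for a in cobb_angles]) # expect "C" with an ~18 degree max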

76
scoliovis-api/scoliovis/get_model.py Normal file
View File

@@ -0,0 +1,76 @@
import os
from pathlib import Path
# Keypoint RCNN Model
import torch
from torchvision.models.detection.rpn import AnchorGenerator
import torchvision
def _download_kprcnn_model():
print("DETA: Downloading Keypoint RCNN Model...")
from deta import Deta
deta = Deta(os.environ.get("DETA_ID"))
models = deta.Drive("models")
model_file = models.get('keypointsrcnn_weights.pt')
with open("models/keypointsrcnn_weights.pt", "wb+") as f:
for chunk in model_file.iter_chunks(1024):
f.write(chunk)
print("DETA: Keypoint RCNN model downloaded.")
model_file.close()
def get_kprcnn_model():
model_folder = Path("models")
if not model_folder.exists():
os.mkdir("models")
model_path = Path("models/keypointsrcnn_weights.pt")
# Download if the model does not exist
if model_path.is_file():
print("Keypoint RCNN Model is already downloaded.")
else:
print("Keypoint RCNN Model was NOT FOUND.")
_download_kprcnn_model()
num_keypoints = 4
anchor_generator = AnchorGenerator(sizes=(32, 64, 128, 256, 512), aspect_ratios=(0.25, 0.5, 0.75, 1.0, 2.0, 3.0, 4.0))
model = torchvision.models.detection.keypointrcnn_resnet50_fpn(pretrained=False,
pretrained_backbone=True,
num_keypoints=num_keypoints,
num_classes = 2, # Background is the first class, object is the second class
rpn_anchor_generator=anchor_generator)
# At this point the weights file exists (downloaded above if it was missing)
state_dict = torch.load(model_path, map_location=torch.device('cpu'))
model.load_state_dict(state_dict)
return model
# YoloV5 Model
# def _download_detection_model():
# print("DETA: Downloading Object Detection Model...")
# from deta import Deta
# deta = Deta(os.environ.get("DETA_ID"))
# models = deta.Drive("models")
# model_file = models.get('detection_model.pt')
# with open("models/detection_model.pt", "wb+") as f:
# for chunk in model_file.iter_chunks(1024):
# f.write(chunk)
# print("DETA: Object Detection model downloaded.")
# model_file.close()
# def get_detection_model():
# model_folder = Path("models")
# if not model_folder.exists():
# os.mkdir("models")
# model_path = Path("models/detection_model.pt")
# # Download if the model does not exist
# if model_path.is_file():
# print("Detection Model is already downloaded.")
# else:
# print("Detection Model was NOT FOUND.")
# _download_detection_model()
# # Get model from path and return
# model = torch.hub.load('./yolov5', 'custom', path='./models/detection_model.pt', source='local')
# return model

158
scoliovis-api/scoliovis/kprcnn.py Normal file
View File

@@ -0,0 +1,158 @@
import torch
import torchvision
from torchvision.transforms import functional as F
import numpy as np
from scoliovis.get_model import get_kprcnn_model
# DOWNLOAD THE MODEL (but don't cache)
get_kprcnn_model()
def _filter_output(output):
# 1. Get Scores
scores = output['scores'].detach().cpu().numpy()
# 2. Get Indices of Scores over Threshold
high_scores_idxs = np.where(scores > 0.5)[0].tolist() # Indexes of boxes with scores > 0.5
# 3. Get Indices after Non-max Suppression
post_nms_idxs = torchvision.ops.nms(output['boxes'][high_scores_idxs], output['scores'][high_scores_idxs], 0.3).cpu().numpy() # Indexes of boxes left after applying NMS (iou_threshold=0.3)
# 4. Get final `bboxes` and `keypoints` and `scores` based on indices
np_keypoints = output['keypoints'][high_scores_idxs][post_nms_idxs].detach().cpu().numpy()
np_bboxes = output['boxes'][high_scores_idxs][post_nms_idxs].detach().cpu().numpy()
np_scores = output['scores'][high_scores_idxs][post_nms_idxs].detach().cpu().numpy()
# 5. Keep the Top 17 Scores (the dataset labels at most 17 vertebrae)
sorted_scores_idxs = np.argsort(-1*np_scores) # descending
np_scores = np_scores[sorted_scores_idxs][:17] # index the filtered scores, not the raw ones
np_keypoints = np.array([np_keypoints[idx] for idx in sorted_scores_idxs])[:17]
np_bboxes = np.array([np_bboxes[idx] for idx in sorted_scores_idxs])[:17]
# 6. Sort by ymin
# kps[0] is the first corner point of a vertebra
# kps[0][1] is that corner's y coordinate
ymins = np.array([kps[0][1] for kps in np_keypoints])
sorted_ymin_idxs = np.argsort(ymins) # ascending
np_scores = np.array([np_scores[idx] for idx in sorted_ymin_idxs])
np_keypoints = np.array([np_keypoints[idx] for idx in sorted_ymin_idxs])
np_bboxes = np.array([np_bboxes[idx] for idx in sorted_ymin_idxs])
# 7. Convert everything to List Instead of Numpy
keypoints_list = []
for kps in np_keypoints:
keypoints_list.append([list(map(float, kp[:2])) for kp in kps])
bboxes_list = []
for bbox in np_bboxes:
bboxes_list.append(list(map(int, bbox.tolist())))
scores_list = np_scores.tolist()
return bboxes_list, keypoints_list, scores_list
def predict(images):
"""
images: a single image as a numpy array (H, W, C) or PIL Image.
(The current implementation wraps the input in a one-element batch via
F.to_tensor, so pass one image at a time.)
returns: list of (bboxes, keypoints, scores) tuples, one per image,
with at most 17 vertebrae each.
"""
device = torch.device('cpu')
model = get_kprcnn_model()
model.to(device)
model.eval()
# 1. Process `images`
images_input = [F.to_tensor(images)]
images_input = [image.to(device) for image in images_input]
# 2. Inference
with torch.no_grad():
outputs = model(images_input) # 3. get output
filtered_outputs = [_filter_output(output) for output in outputs]
return filtered_outputs
from scoliovis.cobb_angle_cal import cobb_angle_cal, keypoints_to_landmark_xy
def kprcnn_to_scoliovis_api_format(bboxes, keypoints, scores, image_shape):
"""
@params
- `bboxes, keypoints, scores` - outputs from the model
- `image_shape` - (HEIGHT, WIDTH, CHANNELS)
@returns {
`detections`: {
`class`: number,
`confidence`: number,
`name`: "vert",
`xmax`: number,
`xmin`: number,
`ymin`: number,
`ymax`: number
},
`normalized_detections`: **REMOVED**,
`landmarks`: [x,y,x,y,x,y,x,y,x,y,x,y],
`angles`: {
`pt`: {
`angle`: number,
`idxs`: [number, number]
},
`mt`: {
`angle`: number,
`idxs`: [number, number]
},
`tl`: {
`angle`: number,
`idxs`: [number, number]
}
},
`midpoint_lines`: [
[[x,y],[x,y]],
[[x,y],[x,y]],
[[x,y],[x,y]]
],
`curve_type`: "S" | "C"
}
"""
detections = []
for idx, bbox in enumerate(bboxes):
detections.append({
"class": 0,
"confidence": scores[idx],
"name": "vert",
"xmin": bbox[0],
"ymin": bbox[1],
"xmax": bbox[2],
"ymax": bbox[3],
})
landmarks = []
for kps in keypoints:
for kp in kps:
landmarks.append(kp[0])
landmarks.append(kp[1])
try:
_, angles, curve_type, midpoint_lines = cobb_angle_cal(keypoints_to_landmark_xy(keypoints), image_shape)
except Exception:
curve_type = None
angles = None
midpoint_lines = None
print("Could not calculate Cobb Angle for this Image")
return {
"detections": detections,
"landmarks": landmarks,
"angles": angles,
"curve_type": curve_type,
"midpoint_lines": midpoint_lines,
}

263
scoliovis-api/test_balgrist.py Normal file
View File

@@ -0,0 +1,263 @@
"""
Test ScolioVis API with Balgrist Patient Data
==============================================
Runs Keypoint R-CNN inference on all patient PNG images.
Usage:
python test_balgrist.py
"""
import os
import sys
import json
from pathlib import Path
# Add parent to path for imports
sys.path.insert(0, str(Path(__file__).parent))
import cv2
import numpy as np
from PIL import Image
import matplotlib
matplotlib.use('Agg') # Non-interactive backend
import matplotlib.pyplot as plt
def load_model():
"""Load the Keypoint R-CNN model."""
print("Loading Keypoint R-CNN model...")
from scoliovis.get_model import get_kprcnn_model
import torch
model = get_kprcnn_model()
model.eval()
print("Model loaded successfully!")
return model
def predict_single(model, image_path):
"""
Run prediction on a single image.
Returns:
dict with detections, landmarks, angles, curve_type, midpoint_lines
"""
import torch
from torchvision.transforms import functional as F
from scoliovis.kprcnn import _filter_output, kprcnn_to_scoliovis_api_format
# Load image
image = Image.open(image_path).convert('RGB')
image_cv = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR)
# Prepare input
device = torch.device('cpu')
model.to(device)
image_tensor = F.to_tensor(image_cv)
images_input = [image_tensor.to(device)]
# Inference
with torch.no_grad():
outputs = model(images_input)
# Filter output
bboxes, keypoints, scores = _filter_output(outputs[0])
# Convert to API format
result = kprcnn_to_scoliovis_api_format(bboxes, keypoints, scores, image_cv.shape)
return result, image_cv, bboxes, keypoints
def visualize_result(image_cv, keypoints, result, output_path):
"""
Create visualization with detected vertebrae and angles.
"""
img_rgb = cv2.cvtColor(image_cv, cv2.COLOR_BGR2RGB)
fig, ax = plt.subplots(1, 1, figsize=(10, 16))
ax.imshow(img_rgb)
# Draw keypoints for each vertebra
colors = plt.cm.rainbow(np.linspace(0, 1, len(keypoints)))
for idx, (kps, color) in enumerate(zip(keypoints, colors)):
# Draw 4 corners of vertebra
xs = [p[0] for p in kps]
ys = [p[1] for p in kps]
# Connect corners: top-left -> top-right -> bottom-right -> bottom-left -> top-left
order = [0, 1, 3, 2, 0]
for i in range(4):
ax.plot([xs[order[i]], xs[order[i+1]]],
[ys[order[i]], ys[order[i+1]]],
color=color, linewidth=2)
# Mark center
cx = np.mean(xs)
cy = np.mean(ys)
ax.plot(cx, cy, 'o', color=color, markersize=5)
ax.text(cx + 20, cy, f'V{idx+1}', color=color, fontsize=8)
# Draw midpoint lines if available
if result.get('midpoint_lines'):
for line in result['midpoint_lines']:
ax.plot([line[0][0], line[1][0]],
[line[0][1], line[1][1]],
'g-', linewidth=1, alpha=0.5)
# Add angle info
if result.get('angles'):
angles = result['angles']
title_text = f"Curve Type: {result.get('curve_type', 'N/A')}\n"
title_text += f"PT: {angles['pt']['angle']:.1f}° "
title_text += f"MT: {angles['mt']['angle']:.1f}° "
title_text += f"TL: {angles['tl']['angle']:.1f}°"
ax.set_title(title_text, fontsize=12)
else:
ax.set_title("Could not calculate Cobb angles", fontsize=12)
ax.axis('off')
plt.tight_layout()
plt.savefig(output_path, dpi=150, bbox_inches='tight')
plt.close()
print(f" Visualization saved: {output_path}")
def get_severity(max_angle):
"""Classify scoliosis severity based on maximum Cobb angle."""
if max_angle < 10:
return "Normal"
elif max_angle < 25:
return "Mild"
elif max_angle < 40:
return "Moderate"
else:
return "Severe"
def main():
print("=" * 60)
print("ScolioVis API - Balgrist Patient Test")
print("=" * 60)
# Paths
balgrist_dir = Path("../PCdareSoftware/Balgrist")
output_dir = Path("balgrist_results")
output_dir.mkdir(exist_ok=True)
# Find patient folders
patient_folders = sorted([
d for d in balgrist_dir.iterdir()
if d.is_dir() and d.name.isdigit()
])
print(f"\nFound {len(patient_folders)} patient folders")
# Load model once
model = load_model()
# Results summary
all_results = []
# Process each patient
for patient_folder in patient_folders:
patient_id = patient_folder.name
print(f"\n{'='*60}")
print(f"Processing Patient {patient_id}")
print("=" * 60)
# Find PNG files (AP and Lateral)
png_files = list(patient_folder.glob("*.png"))
patient_results = {"patient_id": patient_id, "images": []}
for png_file in png_files:
print(f"\n Image: {png_file.name}")
try:
# Run prediction
result, image_cv, bboxes, keypoints = predict_single(model, png_file)
# Get angles
if result.get('angles'):
angles = result['angles']
pt = angles['pt']['angle']
mt = angles['mt']['angle']
tl = angles['tl']['angle']
max_angle = max(pt, mt, tl)
severity = get_severity(max_angle)
print(f" Vertebrae detected: {len(keypoints)}")
print(f" Curve type: {result.get('curve_type', 'N/A')}")
print(f" PT: {pt:.1f}°")
print(f" MT: {mt:.1f}°")
print(f" TL: {tl:.1f}°")
print(f" Max angle: {max_angle:.1f}° ({severity})")
image_result = {
"filename": png_file.name,
"vertebrae_detected": len(keypoints),
"curve_type": result.get('curve_type'),
"pt": round(pt, 2),
"mt": round(mt, 2),
"tl": round(tl, 2),
"max_angle": round(max_angle, 2),
"severity": severity
}
else:
print(f" Vertebrae detected: {len(keypoints)}")
print(f" Could not calculate Cobb angles")
image_result = {
"filename": png_file.name,
"vertebrae_detected": len(keypoints),
"error": "Could not calculate angles"
}
patient_results["images"].append(image_result)
# Save visualization
output_filename = f"patient{patient_id}_{png_file.stem}_result.png"
output_path = output_dir / output_filename
visualize_result(image_cv, keypoints, result, output_path)
except Exception as e:
print(f" ERROR: {e}")
patient_results["images"].append({
"filename": png_file.name,
"error": str(e)
})
all_results.append(patient_results)
# Save JSON results
results_file = output_dir / "balgrist_results.json"
with open(results_file, 'w') as f:
json.dump(all_results, f, indent=2)
print(f"\nResults saved to: {results_file}")
# Print summary
print("\n" + "=" * 60)
print("SUMMARY")
print("=" * 60)
print(f"\n{'Patient':<10} {'Image':<25} {'Verts':<8} {'Type':<6} {'PT':<8} {'MT':<8} {'TL':<8} {'Max':<8} {'Severity':<10}")
print("-" * 100)
for patient in all_results:
for img in patient["images"]:
if "error" not in img or "vertebrae_detected" in img:
print(f"{patient['patient_id']:<10} "
f"{img['filename']:<25} "
f"{img.get('vertebrae_detected', 'N/A'):<8} "
f"{img.get('curve_type', 'N/A'):<6} "
f"{img.get('pt', 'N/A'):<8} "
f"{img.get('mt', 'N/A'):<8} "
f"{img.get('tl', 'N/A'):<8} "
f"{img.get('max_angle', 'N/A'):<8} "
f"{img.get('severity', 'N/A'):<10}")
if __name__ == "__main__":
main()

141
scoliovis-api/test_subset5.py Normal file
View File

@@ -0,0 +1,141 @@
"""
Test ScolioVis API with Spinal-AI2024 subset5 images
"""
import sys
import json
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent))
import cv2
import numpy as np
from PIL import Image
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
def load_model():
print("Loading Keypoint R-CNN model...")
from scoliovis.get_model import get_kprcnn_model
model = get_kprcnn_model()
model.eval()
print("Model loaded!")
return model
def predict_single(model, image_path):
import torch
from torchvision.transforms import functional as F
from scoliovis.kprcnn import _filter_output, kprcnn_to_scoliovis_api_format
image = Image.open(image_path).convert('RGB')
image_cv = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR)
device = torch.device('cpu')
model.to(device)
image_tensor = F.to_tensor(image_cv)
images_input = [image_tensor.to(device)]
with torch.no_grad():
outputs = model(images_input)
bboxes, keypoints, scores = _filter_output(outputs[0])
result = kprcnn_to_scoliovis_api_format(bboxes, keypoints, scores, image_cv.shape)
return result, image_cv, keypoints
def visualize_result(image_cv, keypoints, result, output_path):
img_rgb = cv2.cvtColor(image_cv, cv2.COLOR_BGR2RGB)
fig, ax = plt.subplots(1, 1, figsize=(8, 12))
ax.imshow(img_rgb)
colors = plt.cm.rainbow(np.linspace(0, 1, len(keypoints)))
for idx, (kps, color) in enumerate(zip(keypoints, colors)):
xs = [p[0] for p in kps]
ys = [p[1] for p in kps]
order = [0, 1, 3, 2, 0]
for i in range(4):
ax.plot([xs[order[i]], xs[order[i+1]]],
[ys[order[i]], ys[order[i+1]]],
color=color, linewidth=2)
cx, cy = np.mean(xs), np.mean(ys)
ax.plot(cx, cy, 'o', color=color, markersize=4)
if result.get('angles'):
angles = result['angles']
title = f"Type: {result.get('curve_type', 'N/A')}\n"
title += f"PT: {angles['pt']['angle']:.1f}° MT: {angles['mt']['angle']:.1f}° TL: {angles['tl']['angle']:.1f}°"
ax.set_title(title, fontsize=10)
else:
ax.set_title("Could not calculate angles", fontsize=10)
ax.axis('off')
plt.tight_layout()
plt.savefig(output_path, dpi=150, bbox_inches='tight')
plt.close()
def main():
test_images = [
"../data/Spinal-AI2024/Spinal-AI2024-subset5/016001.jpg",
"../data/Spinal-AI2024/Spinal-AI2024-subset5/016002.jpg",
"../data/Spinal-AI2024/Spinal-AI2024-subset5/016003.jpg",
"../data/Spinal-AI2024/Spinal-AI2024-subset5/016004.jpg",
"../data/Spinal-AI2024/Spinal-AI2024-subset5/016005.jpg",
]
output_dir = Path("OUTPUT_TEST_1")
output_dir.mkdir(exist_ok=True)
model = load_model()
results = []
for img_path in test_images:
img_name = Path(img_path).stem
print(f"\nProcessing {img_name}...")
result, image_cv, keypoints = predict_single(model, img_path)
# Save visualization
output_path = output_dir / f"{img_name}_result.png"
visualize_result(image_cv, keypoints, result, output_path)
print(f" Saved: {output_path}")
# Collect results
if result.get('angles'):
angles = result['angles']
results.append({
"image": img_name + ".jpg",
"vertebrae_detected": len(keypoints),
"curve_type": result.get('curve_type'),
"pt": round(angles['pt']['angle'], 2),
"mt": round(angles['mt']['angle'], 2),
"tl": round(angles['tl']['angle'], 2)
})
print(f" Vertebrae: {len(keypoints)}, PT: {angles['pt']['angle']:.1f}°, MT: {angles['mt']['angle']:.1f}°, TL: {angles['tl']['angle']:.1f}°")
else:
results.append({
"image": img_name + ".jpg",
"vertebrae_detected": len(keypoints),
"error": "Could not calculate angles"
})
print(f" Vertebrae: {len(keypoints)}, Could not calculate angles")
# Save JSON results
with open(output_dir / "results.json", 'w') as f:
json.dump(results, f, indent=2)
print(f"\nResults saved to {output_dir}/results.json")
if __name__ == "__main__":
main()