
This is Part 2 of a two-part series on crater detection using deep learning. Missed Part 1? Catch up here →

🚀 Overview

In Part 1, we built a YOLOv10 + Ellipse R-CNN pipeline to detect and localize lunar craters.

This post focuses on:

  • 🌀 Rim-fitting using Ellipse R-CNN
  • 📊 Evaluation metrics and precision challenges
  • ⚙️ System integration and resource constraints
  • 🔍 Final takeaways and next steps

📺 Watch a 1-minute summary of the project:


🧠 From Boxes to Rims: LunarLens Pipeline

Our two-stage detection system, LunarLens, works like this:

  1. YOLOv10 detects crater bounding boxes quickly.
  2. Ellipse R-CNN then fits an ellipse to the rim of each crater.

This hybrid approach balances speed with shape accuracy, making it well suited to autonomous space navigation.
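The hand-off between the two stages can be sketched as follows. This is an illustrative sketch only: `detect_boxes` and `fit_ellipse` are stand-ins for the real YOLOv10 and Ellipse R-CNN models, and the box and ellipse values are dummy data.

```python
import numpy as np

def detect_boxes(image):
    """Stage 1 stand-in for YOLOv10: return crater boxes as (x1, y1, x2, y2)."""
    return [(10, 20, 60, 70), (100, 110, 180, 200)]  # dummy detections

def fit_ellipse(crop):
    """Stage 2 stand-in for Ellipse R-CNN: return (cx, cy, a, b, theta)
    in crop-local coordinates."""
    h, w = crop.shape[:2]
    return (w / 2, h / 2, w / 2, h / 2, 0.0)  # dummy fit

def lunarlens_pipeline(image):
    """Detect boxes, fit an ellipse per crop, map ellipses back to image coords."""
    ellipses = []
    for (x1, y1, x2, y2) in detect_boxes(image):
        crop = image[y1:y2, x1:x2]               # crop the detected crater
        cx, cy, a, b, theta = fit_ellipse(crop)
        ellipses.append((cx + x1, cy + y1, a, b, theta))  # shift center back
    return ellipses

ellipses = lunarlens_pipeline(np.zeros((512, 512), dtype=np.uint8))
```

The key detail is the coordinate shift at the end: the ellipse is fit in crop-local coordinates, so its center must be translated back by the box offset before the two stages agree on a frame.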

LunarLens Pipeline


🌀 Postprocessing with Ellipse R-CNN

Bounding boxes from YOLOv10 are cropped and passed to Ellipse R-CNN, which predicts:

  • (x, y) center
  • Major & minor axes
  • Orientation

This gives us a parametric rim outline that can be sampled into pixel coordinates:

Crater 1: (100.3, 40.8), (101.5, 39.6), ...
Crater 2: (463.2, 80.9), (462.3, 81.4), ...
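Coordinate lists like the ones above can be generated by sampling the five predicted parameters. A minimal sketch (the function name and sampling density are our own choices, not part of the model):

```python
import numpy as np

def ellipse_rim_points(cx, cy, a, b, theta, n=360):
    """Sample n points on an ellipse rim: center (cx, cy),
    semi-major/minor axes (a, b), orientation theta in radians."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y = a * np.cos(t), b * np.sin(t)              # axis-aligned ellipse
    xr = cx + x * np.cos(theta) - y * np.sin(theta)  # rotate by theta,
    yr = cy + x * np.sin(theta) + y * np.cos(theta)  # then translate to center
    return np.stack([xr, yr], axis=1)

rim = ellipse_rim_points(100.3, 40.8, 12.0, 9.0, np.deg2rad(30.0), n=8)
```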

📊 Evaluation & Metrics

We assessed detection quality at IoU thresholds of 50% and 70%, using AP, precision, and recall.

| Metric    | IoU = 0.50 | IoU = 0.70 |
|-----------|-----------:|-----------:|
| AP        | 0.111      | 0.003      |
| Recall    | 0.145      | 0.010      |
| Precision | 0.337      | 0.042      |

🧠 Key takeaway: precision drops sharply at the higher IoU threshold, showing that tight rim localization, not detection, is the hard part.
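To make the metrics concrete, here is roughly how precision and recall at a single IoU threshold can be computed. This is a simplified sketch using axis-aligned boxes and greedy matching; our actual evaluation scores the fitted ellipses, but the matching logic is analogous.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def precision_recall(preds, gts, iou_thresh):
    """Greedily match each prediction to the best unmatched ground truth."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, iou_thresh
        for i, g in enumerate(gts):
            iou = box_iou(p, g)
            if i not in matched and iou >= best_iou:
                best, best_iou = i, iou
        if best is not None:
            matched.add(best)
            tp += 1
    return tp / len(preds), tp / len(gts)
```

A prediction overlapping a crater at IoU 0.6 counts as a true positive at the 0.50 threshold but as a false positive at 0.70, which is why precision falls so sharply between the two columns.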

🧪 System Performance Under Constraints

We were tasked with running this system on CPU-only hardware with Raspberry Pi-level specs.

| Constraint | Result          |
|------------|-----------------|
| Image size | 5 MP            |
| Runtime    | ~20 sec / image |
| RAM        | < 4 GB          |
| Hardware   | No GPU          |
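Per-image runtime figures like the one above can be collected with a simple wall-clock harness (a generic sketch, not our exact benchmarking code):

```python
import time

def seconds_per_image(process_fn, images):
    """Average wall-clock seconds spent in process_fn per image."""
    start = time.perf_counter()
    for img in images:
        process_fn(img)
    return (time.perf_counter() - start) / len(images)

# Toy example: any callable that takes one image-like object works here.
avg = seconds_per_image(lambda img: sum(img), [[1, 2, 3]] * 5)
```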

πŸ” Final Thoughts & Next Steps

  • ✅ YOLOv10 was great for rapid detection, but rim precision required postprocessing.
  • ✅ Ellipse R-CNN helped convert detections into usable geometric inputs.
  • ⚠️ Evaluation at high IoU exposed the need for better shape-fit training.

Next steps:

  • Fine-tune Ellipse R-CNN on lunar tile crops
  • Apply weighted evaluation based on crater size
  • Improve YOLO thresholding and ensemble strategies

🤝 Acknowledgments

Built by: Anekha Sokhal, Tian Le, Henry Tran, Juan Hevia, Madeleine Harrell, Jeremy Xu
Mentored by: Kyle Smith (NASA), Arko Barman, Ananya Muguli


The team at NASA Johnson Space Center, Houston


💬 Have questions, or want to explore vision systems for autonomous spacecraft?
Feel free to reach out; I'd love to chat!