Methodology
How We Test and Score
Open methodology for evaluating solar equipment: data sources, scoring rubric, and limitations.
Data collection
- Manufacturer official product pages and PDF datasheets
- Schema.org JSON-LD markup on retailer pages
- Industry-standard certification disclosures (UL, IEC, CE)
- Cross-referenced retailer feeds (Amazon, official storefronts)
We do not copy competitor databases or curated content. Specifications are factual and not subject to copyright; we reformat them into our standardized schema.
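As a sketch of how structured markup can be pulled from a retailer page, the snippet below extracts Schema.org JSON-LD blocks using only the Python standard library. The page content, product name, and price fields are illustrative assumptions, not data from any real listing.

```python
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collect and parse <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self._buf = []
        self.blocks = []  # parsed JSON-LD objects, in document order

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True
            self._buf = []

    def handle_data(self, data):
        if self._in_jsonld:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            self._in_jsonld = False
            self.blocks.append(json.loads("".join(self._buf)))

# Hypothetical retailer page fragment for illustration
page = """<html><head>
<script type="application/ld+json">
{"@type": "Product", "name": "Example 1000W Power Station",
 "offers": {"@type": "Offer", "price": "899.00", "priceCurrency": "USD"}}
</script></head></html>"""

parser = JSONLDExtractor()
parser.feed(page)
product = parser.blocks[0]
print(product["name"], product["offers"]["price"])
```

In practice the parsed objects would then be mapped into the standardized schema mentioned above; only the factual specification values are retained.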
Scoring rubric
- Specifications quality (30%): capacity, output, efficiency vs. class median
- Build & reliability (20%): chemistry, cycle life, warranty terms
- Feature set (20%): ports, expandability, app, charging options
- Value (20%): price-per-Wh relative to comparable models
- Documentation & support (10%): manual quality, customer support, software updates
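The weighted rubric above can be expressed as a simple weighted sum. The 0-10 subscore scale and the example unit's values below are assumptions for illustration; only the category weights come from the rubric.

```python
# Category weights from the scoring rubric (sum to 1.0)
WEIGHTS = {
    "specifications": 0.30,
    "build_reliability": 0.20,
    "features": 0.20,
    "value": 0.20,
    "documentation": 0.10,
}

def editor_score(subscores: dict) -> float:
    """Combine per-category subscores (assumed 0-10 scale) into one score."""
    assert set(subscores) == set(WEIGHTS), "all five categories are required"
    return round(sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS), 2)

# Hypothetical unit scored against the rubric
score = editor_score({
    "specifications": 8.5,
    "build_reliability": 7.0,
    "features": 9.0,
    "value": 6.5,
    "documentation": 8.0,
})
print(score)  # -> 7.85
```

A weighted sum keeps the rubric transparent: changing a weight or a subscore has an obvious, proportional effect on the final score.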
What we do NOT do
- We do not currently conduct destructive tests or independent capacity verification
- We do not personally measure cycle life (figures are manufacturer-reported, with a disclaimer)
- We do not boost or suppress scores based on affiliate-program participation
Limitations
Manufacturer specifications can be optimistic (especially capacity ratings). Where independent third-party tests exist (e.g., Will Prowse, HOBOTECH, DC Rainmaker for relevant adjacent categories), we cite them. Until we have testing infrastructure, editor scores should be treated as a structured comparison aid, not a substitute for hands-on review.