Stack & Stars
Do storefront tech choices rhyme with public sentiment? Compare reviews and ratings across inferred commerce stacks.
Commerce-native stacks are associated with higher review volume and slightly higher ratings within the same cities.
Question: Are Google review counts and ratings associated with the inferred commerce stack?
Method: Infer stack labels from BuiltWith data, capture-time tech detection, and network fingerprints, then compare review distributions by stack using within-city comparisons.
Prediction: Median review counts differ by stack in high-N cities, not just in national aggregates.
Test: Compare medians and deciles by stack; repeat within top cities to reduce aggregation bias.
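The test above can be sketched with stdlib tools: group review counts by (city, stack) and compare per-group medians, so stacks are contrasted with geography held constant. The city names, stack labels, and counts below are illustrative placeholders, not values from the dataset.

```python
import statistics
from collections import defaultdict

# Hypothetical rows: (city, stack, review_count). Labels are illustrative.
rows = [
    ("Toronto", "Dutchie", 120), ("Toronto", "Dutchie", 80),
    ("Toronto", "Custom", 40), ("Toronto", "Custom", 30),
    ("Vancouver", "Dutchie", 200), ("Vancouver", "Dutchie", 90),
    ("Vancouver", "Custom", 60), ("Vancouver", "Custom", 20),
]

def medians_by_stack_within_city(rows):
    """Group review counts by (city, stack) and return per-group medians."""
    groups = defaultdict(list)
    for city, stack, n_reviews in rows:
        groups[(city, stack)].append(n_reviews)
    return {key: statistics.median(vals) for key, vals in groups.items()}

medians = medians_by_stack_within_city(rows)
# Within each city, compare stacks directly; a national pooled median
# would mix city effects into the stack comparison.
for (city, stack), med in sorted(medians.items()):
    print(f"{city:10s} {stack:8s} median reviews = {med}")
```

With enough rows per group, the same grouping supports deciles via `statistics.quantiles(vals, n=10)` in place of the median.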
A shelf of storefronts appears: each platform becomes a section.
Jars fill with starlight: review volume becomes glow; ratings lift the shelf line upward.
We zoom into cities to test whether differences persist when geography is held constant.
Data:
- cannabis_stores_canada.sqlite
- 11_stack_and_stars_tech_vs_reviews.json
Caveats:
- Reviews are a public attention proxy, not revenue.
- Stacks are inferred; BuiltWith/capture data can miss or mislabel vendors.
- Geography confounds most aggregates; prefer within-city views.
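The stack-inference step, and the mislabeling caveat above, can be made concrete with a minimal rule-based labeler. The signature strings and the fallback behavior are assumptions for illustration, not the project's actual rules; conflicting or absent matches are sent to an explicit "Unknown" bucket rather than guessed.

```python
# Hypothetical vendor signatures: substring of a detected technology or
# hostname -> stack label. Illustrative only.
SIGNATURES = {
    "dutchie": "Dutchie",
    "shopify": "Shopify",
    "buddi": "Buddi",
}

def infer_stack(fingerprints):
    """Map a set of detected technology strings to a single stack label."""
    hits = {label
            for tech in fingerprints
            for needle, label in SIGNATURES.items()
            if needle in tech.lower()}
    if len(hits) == 1:
        return hits.pop()
    # Zero or conflicting matches: label as Unknown instead of guessing,
    # since mislabeled vendors would bias the review comparisons.
    return "Unknown"

print(infer_stack({"cdn.dutchie.com", "react"}))  # → Dutchie
print(infer_stack({"react", "nginx"}))            # → Unknown
```

Keeping "Unknown" as its own category lets the review comparison report how much of the sample the inference fails to label.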