When I launched Bassists.com, the site worked. The architecture was solid, the discovery loop was clicking, and early visitors were spending real time exploring. But there was a problem I hadn’t solved yet: getting content onto the site efficiently without cutting corners on accuracy.
The soft launch went live with 12 bassist profiles, with a list of 300 to build toward. Each profile includes a bio, a curated discography, associated bands, genres, news, and videos. That’s a lot of data per bassist, and the data has to be right. This isn’t a wiki. My name is on it.
Why music databases didn’t work for what I needed
I started with the obvious sources: MusicBrainz and Discogs. Both have extensive databases and APIs. They’re great for a lot of things. I figured I’d pull discographies from one of them and save myself hours of research per bassist.
Then I looked at the data through the lens of what I was building.
My favorite bassist is Jaco Pastorius. According to one of these databases, Jaco was associated with Blood, Sweat & Tears. He jammed with them, someone recorded it, and that recording made it into the database. But Jaco never recorded a studio or live album with them. For the kind of map I was building, that distinction matters. Musicians jam with each other all the time, and I love those recordings, but they’re not the same as a discography credit. Including them would change the connections between artists in ways that didn’t make sense for what I was trying to show.
It wasn’t just Jaco. Across multiple bassists, I was seeing bootlegs, unreleased recordings, and live sit-ins treated the same as legitimate studio and live album credits. These databases are comprehensive by design; they catalog everything. But Bassists.com is built on curated credibility. I can’t have a profile that says a bassist was in a band they were never actually part of.
So I switched approaches. Every discography on Bassists.com is curated by me. I research which albums a bassist actually recorded on and build what I believe is a credible list. By virtue of that decision, the correct band and artist associations are created automatically. The data is clean because a human with decades of knowledge about this music is making the editorial calls.
Building a WordPress content import tool with Claude Code
Curating by hand is the right call for accuracy. But it could easily have become this project’s biggest bottleneck if I hadn’t considered efficiency from the start.
I worked with Claude Code to build an import tool. The input is simple: a text file with the bassist’s name, an array of bands and artists they recorded with, and the albums for each. That’s it.

When I upload that file through the WordPress admin, the system does the rest. It creates the bassist’s profile. It checks whether each band or artist already exists on the site. If not, it creates it. It checks whether each album already exists. If so, it adds the new bassist to that album’s credits. If not, it creates the album and assigns the relationship.
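In spirit, the cascade works like this. The sketch below is a simplified Python model, not the actual WordPress code: plain dicts stand in for the site’s content store, and the record shape and function names are mine.

```python
# Simplified sketch of the import cascade. The real tool runs inside the
# WordPress admin; dicts stand in for the site's content store here.

def import_bassist(record, site):
    # 1. Create the bassist's profile.
    site["bassists"][record["name"]] = {"bands": []}

    for band in record["bands"]:
        # 2. Create the band/artist only if it doesn't exist yet.
        site["bands"].setdefault(band["name"], {"albums": []})
        site["bassists"][record["name"]]["bands"].append(band["name"])

        for title in band["albums"]:
            key = (band["name"], title)
            if key in site["albums"]:
                # 3a. Album already exists: add this bassist to its credits.
                site["albums"][key]["credits"].append(record["name"])
            else:
                # 3b. Album is new: create it and assign the relationship.
                site["albums"][key] = {"credits": [record["name"]]}
                site["bands"][band["name"]]["albums"].append(title)

site = {"bassists": {}, "bands": {}, "albums": {}}
import_bassist(
    {"name": "Jaco Pastorius",
     "bands": [{"name": "Weather Report",
                "albums": ["Heavy Weather", "Black Market"]}]},
    site,
)
# A later import touching the same band and album only adds credits;
# nothing is duplicated.
import_bassist(
    {"name": "Alphonso Johnson",
     "bands": [{"name": "Weather Report", "albums": ["Black Market"]}]},
    site,
)
```

The second import finds Weather Report and Black Market already on the site, so it attaches the new bassist to the existing album instead of creating a duplicate. That idempotency is what lets one file cascade safely into dozens of pages.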
One import file can cascade into dozens of new pages. When I uploaded a particularly prolific session musician, his single file added 60 band and artist profiles to the site, plus all the associated albums. One import, and 60 new pages appeared with all the correct relationships in place.
That efficiency is what makes scaling to 300 profiles realistic. The import system handles the mechanical work. I handle the editorial decisions.
Using ChatGPT and Gemini to fact-check music discographies
The curation still requires research. For each bassist, I need to verify their discography: which albums they actually played on, which bands they were legitimately part of, and whether anything is missing.
This is where I use AI, but not as the authority. I use ChatGPT and Gemini as research assistants and have them check each other.
I’ll pull a discography list and run it through both. I ask each one: Can you fact-check that this bassist was actually on these albums? Is anything missing? Are there newer releases I might not know about? Then I cross-correlate the results. When one says an album belongs and the other doesn’t mention it, I dig in.
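The cross-correlation itself is just a set comparison. Here’s a sketch, assuming each assistant’s answer has been reduced to a plain list of album titles (the titles below are illustrative):

```python
# Sketch of the cross-check: normalize each assistant's list of titles,
# keep the agreements, and flag disagreements for manual research.

def cross_correlate(list_a, list_b):
    norm = lambda titles: {t.strip().lower() for t in titles}
    a, b = norm(list_a), norm(list_b)
    return {
        "agreed": sorted(a & b),   # both assistants confirm the album
        "dig_in": sorted(a ^ b),   # only one mentions it: research it
    }

result = cross_correlate(
    ["Heavy Weather", "Jaco Pastorius", "Word of Mouth"],
    ["Heavy Weather", "Word of Mouth", "Trilogue"],
)
```

Anything in the "dig_in" bucket gets human attention before it goes anywhere near a profile.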
Everything is curated by me in the end. I put my stamp on every profile. But AI handles the mechanical research and cross-referencing that would have taken me three or four hours per bassist. Now it takes a fraction of that, with all the same checkpoints.
Automating Amazon and Apple Music links with a Python script
Each album on Bassists.com links to Amazon and Apple Music. Apple’s API also provides 90-second sample clips, so there’s a built-in player for every album. Visitors click a record to listen to a preview right on the site.
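Those clips come through Apple’s public iTunes Search API: looking up an album id with entity=song returns the album’s tracks, each carrying a previewUrl for its sample clip. A minimal sketch of that lookup (the helper names and the canned response are mine; the JSON shape follows the API’s documented format):

```python
# Sketch of pulling preview clips via Apple's public iTunes Search API.
from urllib.parse import urlencode

def build_lookup_url(collection_id):
    # Looking up an album id with entity=song returns the album plus its
    # tracks; each track result includes a previewUrl sample clip.
    query = urlencode({"id": collection_id, "entity": "song"})
    return f"https://itunes.apple.com/lookup?{query}"

def extract_previews(lookup_json):
    # The first result is the collection itself; tracks carry the clips.
    return [r["previewUrl"] for r in lookup_json.get("results", [])
            if r.get("wrapperType") == "track" and "previewUrl" in r]

# Canned response in the API's shape, for illustration only.
sample = {"results": [
    {"wrapperType": "collection", "collectionName": "Heavy Weather"},
    {"wrapperType": "track", "trackName": "Birdland",
     "previewUrl": "https://example.com/birdland.m4a"},
]}
```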

Getting those links for thousands of albums would have been another bottleneck. I had Claude Code write a Python script that takes the text file of band names and album titles, crawls both platforms for the Amazon ASIN and the Apple Music ID, updates the import file, and generates a report labeling each album “Good Match,” “Uncertain,” or “Missing.” I’ve found that “Good Match” is highly reliable. “Uncertain” is a mix, and I fix the wrong ones. And “Missing” doesn’t always mean missing; usually it’s a slight deviation in an artist or album name, which I correct.
The script does the lookup. I do the judgment.
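The classification step boils down to fuzzy string matching. A simplified sketch using the standard library’s difflib; the 0.9 threshold is illustrative, not necessarily what my script uses:

```python
# Sketch of the match-classification step using fuzzy string similarity.
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def classify(query_title, found_title):
    """Label a platform lookup result the way the report does."""
    if found_title is None:
        return "Missing"        # often just a name deviation to fix
    if similarity(query_title, found_title) >= 0.9:
        return "Good Match"     # reliably correct in practice
    return "Uncertain"          # mixed bag: verify by hand
```

A slightly different title, like a reissue with “(Live)” appended, lands in “Uncertain” rather than being silently accepted, which is exactly where a human should be looking.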
Where AI helps and where only expertise will do
There’s important work on Bassists.com that doesn’t require my expertise. Looking up an Amazon ASIN is not a creative act. Cross-referencing an album title across two databases is mechanical. Those are the things AI should handle.
But knowing that Jaco Pastorius was never in Blood, Sweat & Tears? That’s years of listening to this music. No API knows that. No AI caught it. I did.
The system I built keeps both of those things in their right place. AI handles the mechanical work. I handle the judgment. And every profile on the site has my name behind it because I’ve verified it, not because I’ve automated it.



