I've already done this in another application and struggled with it for a while...
You divide the total number of samples in the audio file by the number of pixels you want to display on the graph. That gives you a bucket size. For each bucket, you calculate the minimum and maximum sample values and draw them, scaled to the resolution you are rendering at.
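A minimal sketch of that bucketing step in Swift, assuming the decoded audio is already available as an array of `Float` samples (the names `downsample`, `samples`, and `pixelWidth` are mine, not from the original answer):

```swift
import Foundation

/// Reduce raw samples to one (min, max) pair per on-screen pixel.
/// `samples` is assumed to be the decoded audio; `pixelWidth` is the
/// number of horizontal pixels available for the waveform.
func downsample(_ samples: [Float], toPixelWidth pixelWidth: Int) -> [(min: Float, max: Float)] {
    guard pixelWidth > 0, !samples.isEmpty else { return [] }

    // Number of samples that fall into one pixel column (one "bucket").
    let bucketSize = max(1, samples.count / pixelWidth)
    var buckets: [(min: Float, max: Float)] = []
    buckets.reserveCapacity(pixelWidth)

    var start = 0
    while start < samples.count && buckets.count < pixelWidth {
        let end = min(start + bucketSize, samples.count)
        let slice = samples[start..<end]
        // Keep only the extremes of each bucket; that's all a single pixel column can show anyway.
        buckets.append((min: slice.min() ?? 0, max: slice.max() ?? 0))
        start = end
    }
    return buckets
}
```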
I can provide additional examples if necessary.
Regarding graphics: (I'm not an iOS developer, but programming on the Mac isn't that much different.) Just create a subclass of NSView (it would be UIView on iOS) and override the drawRect method. Then write a method that takes an array of values for your file and draws a bunch of lines on the screen. There is no black magic here!
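Something along these lines, as a sketch on iOS (names like `WaveformView` and `buckets` are my own, and sample values are assumed to be normalized to -1...1; on macOS the same idea works with an NSView subclass and NSBezierPath):

```swift
import UIKit

/// Hypothetical view that draws one vertical line per (min, max) bucket.
final class WaveformView: UIView {

    /// One (min, max) pair per horizontal pixel, e.g. from the bucketing step above.
    var buckets: [(min: Float, max: Float)] = [] {
        didSet { setNeedsDisplay() }
    }

    override func draw(_ rect: CGRect) {
        guard !buckets.isEmpty, let context = UIGraphicsGetCurrentContext() else { return }

        let midY = rect.midY
        // Scale sample values (assumed -1...1) to half the view height.
        let amplitude = rect.height / 2

        context.setStrokeColor(UIColor.systemBlue.cgColor)
        context.setLineWidth(1)

        for (x, bucket) in buckets.enumerated() {
            let xPos = CGFloat(x)
            // One vertical line per bucket, from its minimum to its maximum.
            context.move(to: CGPoint(x: xPos, y: midY - CGFloat(bucket.max) * amplitude))
            context.addLine(to: CGPoint(x: xPos, y: midY - CGFloat(bucket.min) * amplitude))
        }
        context.strokePath()
    }
}
```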
You really don't need a library for this! And as another plus: if you keep it fairly general, you can always reuse it.