Use an NSValueTransformer subclass to convert your image to an NSData object, then store that data in Core Data. You can register your transformer subclass for the attribute in the Xcode model editor (make the attribute Transformable and enter your transformer's class name). Apple's PhotoLocations sample code shows an example, and this tutorial covers it as well.
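As a sketch of that approach, here is a minimal NSValueTransformer subclass in Swift that converts a UIImage to PNG data and back. The class name `ImageToDataTransformer` is an assumption; use whichever name you register in the model editor.

```swift
import UIKit

// Sketch: converts UIImage <-> NSData for a Transformable Core Data
// attribute. The class name "ImageToDataTransformer" is hypothetical.
class ImageToDataTransformer: ValueTransformer {
    override class func transformedValueClass() -> AnyClass {
        // The value actually written to the store is NSData.
        return NSData.self
    }

    override class func allowsReverseTransformation() -> Bool {
        // We can recover the UIImage from the stored data.
        return true
    }

    // UIImage -> NSData (what gets persisted)
    override func transformedValue(_ value: Any?) -> Any? {
        guard let image = value as? UIImage else { return nil }
        return image.pngData() as NSData?
    }

    // NSData -> UIImage (what you read back)
    override func reverseTransformedValue(_ value: Any?) -> Any? {
        guard let data = value as? Data else { return nil }
        return UIImage(data: data)
    }
}
```

With this in place, set the attribute's type to Transformable in the model editor and enter the class name as its Value Transformer Name; Core Data will invoke the transformer automatically on save and fetch.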
Edit for completeness: as others have indicated, a blob that is too large can cause performance problems. As @Jesse noted, iOS 5 added an optimization ("Allows External Storage") where, if the binary data is large, Core Data will store it outside the persistent store for you. If you need to target pre-iOS 5 and the image is large, you should instead save the file somewhere in your app's sandbox and store only its URL (or path) in Core Data. There is a good discussion on the Apple Developer Forums here about the size limits for binary data in the store.
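The pre-iOS 5 fallback described above might look like this; `fileName` and the idea of storing only the file name in a String attribute are assumptions for illustration.

```swift
import UIKit

// Sketch: write a large image into the app sandbox and keep only its
// file name in Core Data. "fileName" is a hypothetical identifier you
// would store in a String attribute on your managed object.
func saveImageToSandbox(_ image: UIImage, fileName: String) throws -> URL {
    let docs = FileManager.default.urls(for: .documentDirectory,
                                        in: .userDomainMask)[0]
    let url = docs.appendingPathComponent(fileName)
    guard let data = image.pngData() else {
        throw NSError(domain: "ImageSave", code: 1, userInfo: nil)
    }
    // Atomic write so a crash mid-save doesn't leave a truncated file.
    try data.write(to: url, options: .atomic)
    return url   // persist url.lastPathComponent, not the full path
}
```

Storing the file name rather than the absolute path is deliberate: the sandbox container path can change between app updates, so you should rebuild the full URL from the Documents directory at read time.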
Good luck!