Topic: CoverDownloader

CoverDownloader

Reply #125
Are there any tutorials or additional information on how to develop scripts for this component? Maybe a generic script with commonly used routines for pulling information off pages? I sat and tried to whip one up for Google Images but got nowhere fast. The only script this component comes with that seems at all useful is the Amazon script, but Amazon's selection is rather limited if your listening habits stray outside Western mainstream media.

I did a bit of googling about Boo, and while it does resemble Python, most of what I found was simple "hello world" material and not helpful for parsing HTML/JavaScript for image links. To be completely honest, my interest in Boo does not extend much past using it for this component.

I currently use TAGZ to build a URL that foo_run launches in Google Images, and the results are usually far better than anything the existing scripts for this component provide.

CoverDownloader

Reply #126
I have never used Boo for anything else, but I do use Python, and the syntax is almost identical. However, Boo lacks Python's standard library; instead you use the .NET Framework, documentation for which can be found here.

I would recommend starting by looking at the Walmart script. I've added some code with comments here:
Code: [Select]
# refs: System.Web

This line adds a reference to System.Web (the # refs: comment is picked up by my own 'preprocessor' before Boo compiles the script). Note that a reference to System.Web isn't actually necessary here, because util (imported below) already references it.
Code: [Select]
namespace CoverSources

You must use the CoverSources namespace for the class that will perform the search. Do not place other classes in the CoverSources namespace (AFAIK Boo only supports one namespace per source file, so put helper classes in another file and import them, as I have done with util).
Code: [Select]
import System.Text.RegularExpressions
import util

util is just some helper functions (GetPage, EncodeUrl). See util.boo for details.
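For anyone more comfortable in Python (which Boo closely resembles), rough equivalents of those two helpers might look like the sketch below. The names get_page/encode_url are made up for illustration; the real implementations live in util.boo:

```python
import urllib.request
from urllib.parse import quote_plus

def get_page(url):
    # Fetch a URL and return the response body as text,
    # roughly what util.boo's GetPage does.
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8", errors="replace")

def encode_url(query):
    # Percent-encode a query string for embedding in a URL,
    # roughly what util.boo's EncodeUrl does.
    return quote_plus(query)

print(encode_url("iron maiden"))  # iron+maiden
```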
Code: [Select]
class Walmart:

Time to define the class that will do the work. Note that this class is never instantiated, so everything has to be static.
Code: [Select]
    static SourceName as string:
        get: return "Walmart"
    static SourceVersion as decimal:
        get: return 0.2

The above two properties must be defined.

Now comes the function which fetches the list of thumbnails. It always takes these three parameters: a reference to a callback class, a (Unicode) string containing the name of the artist, and another (Unicode) string with the name of the album. Either of the two strings can be empty ("").
Code: [Select]
    static def GetThumbs(coverart,artist,album):
            query = artist+" "+album
            params = 'search_query=' + EncodeUrl(query)
            text = GetPage("http://www.walmart.com/catalog/search-ng.gsp?Continue.x=0&Continue.y=0&Continue=Find&ics=20&ico=0&search_constraint=4104&" + params)

Now we use some regular expressions (System.Text.RegularExpressions.Regex) to extract the info we need from the page we fetched using GetPage(). This is just one way of parsing the page; you could also search for certain bits of text using the String.* functions.
Note there are two kinds of result page Walmart can return: one is a list of results, the other is when it goes directly to the product page.
Code: [Select]
            r = Regex("<a\\shref=\"/catalog/product\\.do\\?product_id=[0-9]+\"><img\\ssrc=\"([^\"]+)60X60.gif\"[^>]+alt=\"([^\"]*)\"[^>]*>",RegexOptions.Multiline)
            r2 = Regex("""<a\shref="javascript:photo_opener\('(http://i.walmart.com/[a-zA-Z0-9/]+_)500X500.jpg""",RegexOptions.Multiline)
            if r2.IsMatch(text):
                result = r2.Match(text)
                coverart.AddThumb(result.Groups[1].Value+"150X150.jpg","Product Page",500,500,result.Groups[1].Value)
                return
            iterator = r.Matches(text)
            coverart.SetCountEstimate(iterator.Count)
            for result as Match in iterator:
                coverart.AddThumb(result.Groups[1].Value+"150X150.jpg",result.Groups[2].Value,500,500,result.Groups[1].Value)

<callback object>.SetCountEstimate() is not necessary, but allows the GUI to display progress like '3/10' instead of just '3'.
<callback object>.AddThumb() adds a thumbnail result. The parameters are as follows, in this order:
  • thumbnail url as a string OR System.Stream of thumbnail image data OR thumbnail as a System.Drawing.Image,
  • the name of the result as a string (unicode),
  • the width and then the height of the FULL SIZE image (not the thumbnail); note these values are not actually displayed anywhere because they are usually just a guess, so just use 0 for both,
  • an arbitrary piece of data that will be given to you in the procedure below, so include all the data you need to retrieve the full size art. If you need to include more than one value you can use an array or struct or something.
The following procedure is called when the user selects a piece of art to download or to preview. Note that it will only ever be called once for a particular piece of art, due to caching. There is only one argument: the data you passed as the last argument of AddThumb() above.

As with the first argument of AddThumb(), the return value of this function can be either a URL or a System.Stream, but this time NOT a System.Drawing.Image.
Code: [Select]
    static def GetResult(param):
            return param as string + "500X500.jpg"


I hope this helps you a bit. I think the forum will mess up the indenting, but if you look at the original script you should be able to work it out. You don't need to worry about error handling, the application will catch any exceptions thrown. If you want a good IDE with intellisense, you can install SharpDevelop.
Your scripts can be debugged using Visual Studio or the standalone CLR debugger that comes with the free .NET SDK.

And if you do write a script, or you need help writing one, please post it here.

Thanks.

CoverDownloader

Reply #127
Hey david. awesome plugin! 

I do have one request, and I'm not sure - it may already be implemented: what I'd like is that if I'm listening to an album for which I don't yet have a cover, it will automatically run the search and download one for me.

I have something like this working for me at the moment using Samurize AMPI 'getAlbumCover'. I use
Code: [Select]
C:\Program Files\Samurize\covers\%artist% - %album%*
&
Code: [Select]
C:\Program Files\Samurize\covers\%artist% - %title%*
in Album Art Panel preferences to show the covers downloaded by Samurize.

CoverDownloader

Reply #128
Thank you david for that breakdown. I think I'm almost there but could use a little help; I am obviously a noob at this Boo language. This is my script so far:
Code: [Select]
namespace CoverSources
import System.Text.RegularExpressions
import util

class GoogleImage:
    static SourceName as string:
        get: return "GoogleImage"
    static SourceVersion as decimal:
        get: return 0.2
    static def GetThumbs(coverart,artist,album):
        query = artist+" "+album
        params = EncodeUrl(query)
        text = GetPage("http://images.google.com/images?q=" + params)
        r = Regex("dyn.img(\"[^>]\",\"[^>]\",\"[^>]\",\"[^>]\",\"[^>]\",\"[^>]\",\"[^>]\",\"[^>]\",\"[^>]\",\"[^>]\",\"[^>]\",\"[^>]\",\"[^>]\",\"[^>]\")",RegexOptions.Multiline)
        iterator = r.Matches(text)
        coverart.SetCountEstimate(iterator.Count)
        for result as Match in iterator:
            coverart.AddThumb("http://www.images.google.com/images?q=tbn:" + result.Groups[3].Value + result.Groups[4].Value,result.Groups[7].Value,0,0,result.Groups[4].Value)
    static def GetResult(param):
        return "http://"+param
I think my problem is with the r = Regex() line. The Google image page contains a line toward the bottom which contains (after some other code) groups of 14 values, each within quotes and separated by commas, inside a dyn.img() function; each group is then separated by a semicolon. I think I've got most of the mechanics down except for the bit where it parses the page for these groupings. I will continue to tinker with it, but any input you could provide would be welcome.

Also, how would I substitute all spaces in a string with a plus sign?

CoverDownloader

Reply #129
Try this:
Code: [Select]
r=Regex("""dyn\.img\("([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)"\)""")
I haven't tested it in a script, but I know the regular expression works. Note that using triple quotes (""") avoids having to escape every quote mark and backslash.
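Python's regex engine agrees with .NET's on everything used here, so you can sanity-check the pattern outside the component. The dyn.img(...) line below is fabricated sample data, not real Google output:

```python
import re

# Build the same pattern: 14 quoted, comma-separated capture groups.
pattern = re.compile(r'dyn\.img\(' + ','.join([r'"([^"]*)"'] * 14) + r'\)')

sample = 'dyn.img("a","","b","c","d","100","100","e","","","f","g","h","")'
m = pattern.search(sample)
print(m.group(1), m.group(8))  # a e
```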

Quote
Also, how would I substitute all spaces in a string with a plus sign?
This should do it:
Code: [Select]
query.Replace(' ','+')
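One caveat worth knowing: .NET's String.Replace returns a new string rather than modifying the original, so the result must be assigned back (query = query.Replace(' ','+')). Python behaves the same way:

```python
query = "iron maiden live"
plussed = query.replace(" ", "+")  # strings are immutable; capture the return value
print(query)    # iron maiden live  (unchanged)
print(plussed)  # iron+maiden+live
```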

Good luck, and I'm sure many would appreciate it if you share your script when you're finished.

 

CoverDownloader

Reply #130
Current code:
Code: [Select]
namespace CoverSources
import System.Text.RegularExpressions
import util

class GoogleImage:
    static SourceName as string:
        get: return "GoogleImage"
    static SourceVersion as decimal:
        get: return 0.2
    static def GetThumbs(coverart,artist,album):
#        query = artist+" "+album
#        params = EncodeURL(query)
        params = "test"
        text = GetPage("http://images.google.com/images?q="+params)
        r = Regex("""dyn\.img\("([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)"\)""")
        iterator = r.Matches(text)
        coverart.SetCountEstimate(iterator.Count)
        for result as Match in iterator:
            coverart.AddThumb("http://www.images.google.com/images?q=tbn:"+result.Groups[3].Value+result.Groups[4].Value,result.Groups[7].Value,0,0,result.Groups[4].Value)
    static def GetResult(param):
        return "http://"+param

Well, it's still indicating 0/0 images. I commented out the artist & album and set params to a known-good static string to rule out improper processing of the album and artist. If I view the source of the page http://images.google.com/images?q=test I can see the text I'm looking for in the latter portion of the line four lines up from the bottom. My understanding of how Boo parses the Regex line with its tokens is very limited, so I might be doing something blatantly wrong. Is [^>] the token that marks text to be retrieved later with Groups[n]? And what is the significance of the asterisks and parentheses in your example?

CoverDownloader

Reply #131
Regular expressions are almost a programming language of their own, and are not specific to Boo. They are incredibly useful for parsing text, so anything you learn about them is probably not a waste of time. Scripting languages like Perl have built-in support for regexes, as does the .NET Framework.

This page outlines the basic syntax:
http://en.wikipedia.org/wiki/Regular_expressions#Syntax

To answer your question: anything in round brackets ( () ) denotes a group, and anything in square brackets denotes a set of characters to match. A caret (^) at the start of the set means 'match any character except these', * means 'match zero or more repetitions', and + means 'one or more'. I used * because some of the groups may be empty. So
Code: [Select]
"([^"]*)"
means: Find two quote signs, and capture (as a group) the text in between them.
Code: [Select]
<img[^>]*>
would match any <img> tag, even if it had arguments, like <img src=blah>.
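Both building blocks are easy to verify interactively; here they are in Python, whose treatment of (), [], ^, * and + matches .NET's:

```python
import re

# "([^"]*)" captures the (possibly empty) text between two quote marks.
print(re.search(r'"([^"]*)"', 'alt="Powerslave"').group(1))  # Powerslave

# <img[^>]*> matches an <img> tag even when it has attributes.
print(re.search(r'<img[^>]*>', 'before <img src=blah> after').group(0))  # <img src=blah>

# * allows zero repetitions, so an empty pair of quotes still matches.
print(re.search(r'"([^"]*)"', 'x=""').group(1) == "")  # True
```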

Hope this helps.

I'm not at home at the moment so I can't test your script, but post back if you are still having trouble and I will help you tonight.

CoverDownloader

Reply #132
I am continuing to have no luck with this script. Does the regex match line by line, or is it smart enough to find multiple instances on the same line? For example, this page contains code like this:
Code: [Select]
(blablahblahlotsofuselesscode);dyn.Img("a","","b","c","d","100","100","e","","","f","g","h","","");dyn.Img("a","","b","c","d","100","100","e","","","f","g","h","","");dyn.Img("a","","b","c","d","100","100","e","","","f","g","h","","");

Instead of

Code: [Select]
(blahblahblahlotsofuseesscode);
dyn.Img("a","","b","c","d","100","100","e","","","f","g","h","","");
dyn.Img("a","","b","c","d","100","100","e","","","f","g","h","","");
dyn.Img("a","","b","c","d","100","100","e","","","f","g","h","","");

where each dyn.Img() represents a different image and a-h is the data I need to pull out with result.Groups[n].Value.

I will continue to hammer away at this code, but working with this script makes me want to kill things.

CoverDownloader

Reply #133
It should match them, even if they are on the same line. In the past I've found this tool helpful for testing regular expressions:
http://www.codeproject.com/dotnet/expresso.asp
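.NET's Regex.Matches, like Python's re.findall, scans the whole input for every non-overlapping match and does not care about line breaks (unless the pattern itself uses anchors). A quick Python check with three calls packed onto one line:

```python
import re

# Three dyn.Img(...) calls on a single line, as in the Google page source.
line = ';'.join(['dyn.Img("a","","b","c","d","100","100","e","","","f","g","h","","")'] * 3)
matches = re.findall(r'dyn\.Img\([^)]*\)', line)
print(len(matches))  # 3
```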
Quote
I will continue to hammer away at this code, but working with this script makes me want to kill things.
Please restrict yourself to insects and small rodents.

CoverDownloader

Reply #134
You could go on forever; it will never work!

Google sends a different page to the CoverDownloader program than to your web browser. Google inspects the HTTP request headers, and if it recognizes a browser you get the JS code. But this program sends only minimal info, so it gets a page of plain HTML.
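You can confirm the sniffing yourself by varying the User-Agent header on an otherwise identical request. A Python sketch of where the header goes (this only builds the request object; it never actually contacts Google):

```python
import urllib.request

# Impersonate a browser: a server that sniffs the User-Agent will then
# serve its 'browser' (JavaScript) version of the page.
req = urllib.request.Request(
    "http://images.google.com/images?q=test",
    headers={"User-Agent": "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) Firefox/1.5"},
)
print(req.get_header("User-agent"))  # Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) Firefox/1.5
```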

CoverDownloader

Reply #135
Ah, thanks for that, normally when I develop a script I write the parser based on what I capture in Ethereal.

To get this script to work, you can either write the parser against the HTML Google actually sends (dump it from the script, or capture the packets with Ethereal while the script runs), or emulate a browser by sending the right headers.

For the latter, a function like this may work (untested):
Code: [Select]
def GetPageAsFirefox(url as string):
    request = System.Net.HttpWebRequest.Create(url)
    request.Headers.Add(System.Net.HttpRequestHeader.UserAgent,"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1b1) Gecko/20060707 Firefox/2.0b1")
    response = request.GetResponse()
    s = System.IO.StreamReader(response.GetResponseStream())
    return s.ReadToEnd()

CoverDownloader

Reply #136
Thank you for the Expresso link david, that is a very useful tool. It appears the string you gave me a while back was correct, and I spent most of last evening coding around in circles.

Thanks for the info dano, I didn't think to look at User-Agent strings. I plugged in some random strings for my Firefox user agent and you are right, Google is playing dirty. I'm sure Google isn't the only site out there that employs this easy check to annoy developers. david, have you considered adding a configurable User-Agent string to the CoverDownloader program itself? Many download managers have this option for precisely this reason.

Looks like I've picked a hell of a site to work with. I'll try stabbing away with your Firefox function and see what happens.

CoverDownloader

Reply #137
Using this page as a reference, I replaced GetPage in util.boo with this slight variation on your code:
Code: [Select]
def GetPage(url as string):
    request = System.Net.HttpWebRequest.Create(url)
    request.Headers.Add(System.Net.HttpRequestHeader.UserAgent,"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.4) Gecko/20060508 Firefox/1.5.0.4")
    response = request.GetResponse()
    s = System.IO.StreamReader(response.GetResponseStream())
    return s.ReadToEnd()

At first I tried it as a separate function and called that from my GoogleImage script, but it did not work. I then replaced the normal GetPage function to see if it would at least work with the existing Amazon script. It did indeed work with Amazon. And while it may just have been the time of day and what I searched for, with the above GetPage routine the Amazon page appeared to return results much faster than with the original routine.

So, assuming this new function is working I guess I'll return to my original script.

CoverDownloader

Reply #138
Interesting, because the Amazon script uses the Amazon web services, an XML/SOAP-based service designed especially for this sort of thing, which returns pages like this:
http://xml.amazon.com/onca/xml3?f=xml&...h=iron%20maiden

CoverDownloader

Reply #139
Well, the Amazon script seems to work fine with this new function; however, both the Walmart script and my GoogleImage script are kicking out this error:


And for reference, here is my current GoogleImage.boo:
Code: [Select]
namespace CoverSources
import System.Text.RegularExpressions
import util

class GoogleImage:
    static SourceName as string:
        get: return "GoogleImage"
    static SourceVersion as decimal:
        get: return 0.2
    static def GetThumbs(coverart,artist,album):
#        query = artist+" "+album
#        params = EncodeURL(query)
#        params.Replace('%20','+')
        params = "test"
        text = GetPage("http://images.google.com/images?q="+params)
        r = Regex("""dyn\.Img\("([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)","([^"]*)"\)""")
        iterator = r.Matches(text)
        coverart.SetCountEstimate(iterator.Count)
        for result as Match in iterator:
            coverart.AddThumb("http://www.images.google.com/images?q=tbn:"+result.Groups[3].Value+result.Groups[4].Value,result.Groups[7].Value,0,0,result.Groups[4].Value)
    static def GetResult(param):
        return "http://"+param

Upon further examination, the Amazon script does not use GetPage, so the perceived performance boost must have been due to what I was searching for and the time of day. That also explains why it behaves differently from the other scripts with the new function. So I guess I'm back to the User-Agent routine. What is the 'name' parameter the error is referring to?

CoverDownloader

Reply #140
I'm using the latest beta version (0.4a) and I have a couple of suggestions. Would it be possible to have the size of the cover either appear below the cover or show up in a tooltip on hover? With the Amazon covers especially, some of them are very low quality, and having to right-click and preview them to see the size is a bit slow.

Also, as someone else mentioned above, the browser saves the art to the same folder as the track it's checking, not to the folder you specify at the top.

CoverDownloader

Reply #141
Quote
I'm using the latest beta version (0.4a) and I have a couple of suggestions. Would it be possible to have the size of the cover either appear below the cover or show up in a tooltip on hover? With the Amazon covers especially, some of them are very low quality, and having to right-click and preview them to see the size is a bit slow.

As discussed before, this is not possible without downloading all the full-sized images. However, you can middle-click instead of right-clicking to preview directly.

Quote
Also as someone else mentioned above the browser saves the art to the same folder as the track it's checking not in the folder you specify at the top.


Are you sure? It seems to work properly for me. It is designed so that when you switch to another task, the path in the 'Save To:' box reverts to its value when that track was added; so if you want to save it elsewhere, you have to select the path after changing tasks.

CoverDownloader

Reply #142
Hi, first off I just want to say how much I appreciate this tool - it's great!

You've mentioned a couple of times that you can't show the full image size without downloading the full-sized images, which makes sense. I'm not sure what your objection to downloading them is, though. Unless you are paying by the byte, the difference between a 10k medium and a 30k large Amazon image is hardly worth the bother.

Anyway, for those of us who don't mind the penalty of always downloading the large images, here's a modified Amazon Boo script that will put the size of the large image either on the image itself or in the image's label (set the two constants near the top of the file to taste).

Code: [Select]
namespace CoverSources
import System.Xml
import System.Drawing
import util

class Amazon:
    static AddSizeToImage = true //If true, this will add a caption to the top-left of the image with the size.
    static AddSizeToLabel = false //If true, this will put the size in the label. Note that this will spoil bold highlighting of exact matches
    static ThumbSize = Size(200, 200) //Size here should match thumbnail size in CoverDownloader settings for best results

    static SourceName as string:
        get: return "Amazon"
    static SourceVersion as decimal:
        get: return 0.2
    static def GetThumbs(coverart,artist,album):
        x = System.Xml.XmlDocument()
        x.Load("http://xml.amazon.com/onca/xml3?f=xml&t=webservices-20&dev-t=1MV23E34ARMVYMBDZB02&type=lite&page=1&mode=music&KeywordSearch="+EncodeUrl(artist+" "+album))
        results = x.GetElementsByTagName("Details")
        coverart.SetCountEstimate(results.Count)
        for node in results:
            large = System.Drawing.Bitmap.FromStream(GetPageStream(node["ImageUrlLarge"].InnerText))
            if large.Height > 10:
                caption = System.String.Format("{0} x {1}", large.Width, large.Height)

            //Create the thumbnail.
            thumb = Bitmap(large, ThumbSize)
            large.Dispose()

            //Caption the image
            if AddSizeToImage:
                g = Graphics.FromImage(thumb)
                f = Font(SystemFonts.DefaultFont, FontStyle.Bold)
                g.DrawString(caption, f, Brushes.White, 1, 1)
                g.DrawString(caption, f, Brushes.Black, 0, 0)
                f.Dispose()
                g.Dispose()

            label = node["ProductName"].InnerText
            //Add the size to the label
            if AddSizeToLabel:
                label = System.String.Format("{0} ({1})", label, caption)

            coverart.AddThumb(thumb,label,0,0,node["ImageUrlLarge"].InnerText)
    static def GetResult(param):
        return param


CoverDownloader

Reply #144
Quote
I'm using the latest beta version (0.4 a) and I have a couple of suggestions, would it be possible to have the size of the cover either appear below the cover or show up in a tooltip when hovered. With the amazon covers especially some of them are very low quality and having to right click and preview them to see the size is a bit slow.

Where is the link to the latest version?

Does it still do Buy.com? Usually Buy.com has bigger/higher-resolution images than Amazon.com, and more selection than Walmart.com.

A while back Buy.com made a change which made it more difficult to download their bigger images manually.

A workaround is not to left-click on the small image, but to right-click and open it in a new window; you will get some IE error message. Go into the address bar and strip off everything right of the .jpg and everything left of http, and you will see the big image in your browser, which you can download to your PC manually.
"J.J."

CoverDownloader

Reply #145
Quote
A workaround is not to left-click on the small image, but to right-click and open it in a new window; you will get some IE error message. Go into the address bar and strip off everything right of the .jpg and everything left of http, and you will see the big image in your browser, which you can download to your PC manually.
Why would you have to do that? Right-clicking in Firefox I see that the URL of the shown image is identical to the URL in the JavaScript "Enlarge" link.

So I just right-click on the image, and choose "Save image as...", and it's the big image that gets saved, not the scaled-down version.

CoverDownloader

Reply #146
Wow, I think you are right. This will save me some time. I knew they made a change and I thought it was to prevent people from downloading their images. I forget how things used to work; I think maybe just clicking on the image launched the large image in a new browser window. I thought they had put in some security and I had found a workaround.

Anyhow, their images seem bigger/higher-res than Amazon.com's.

I have to resize them in Photoshop to 4.8" x 4.75" (I resize to 4.8 x 4.8 and then lop off the 0.05 with Resize Document or whatever.)

Thanks.
"J.J."

CoverDownloader

Reply #147
Quote
Anyhow, their images seem bigger/higher-res than Amazon.com's.

I found out that bigger does not always mean better. I got two album arts for the same CD, one 300x300 and the other 500x500. The latter has a very visible yellowish tinge, ugh.

Quote
Have to resize them in Photoshop to 4.8" x 4.75" (I resize to 4.8 x 4.8 and then lop off the .05 with resize document or whatever.)

Why? Any reason for this?

CoverDownloader

Reply #148
I found out that bigger does not always mean better. I got two scans of the same CD's art, one 300x300, the other 500x500. The latter has a very visible yellowish tinge, ugh.

Have to resize them in Photoshop to 4.8" x 4.75" (I resize to 4.8 x 4.8 and then lop off the .05 with resize document or whatever.)

Some people have crummy scanners & some people don't know how to use their scanner.

For example, CD art is not a photo in the sense of a photographic print. It's not continuous tone but four-color process. I find that if I do not turn on the descreening filter in the Epson Scan utility the result will be speckled, and a lot of images scanned by individuals that you get off the Internet are speckled like this.

Photoshop Elements' Filter / Noise / Despeckle will eliminate some of this, but I'm sure the speckling appears because the right settings weren't chosen by the person scanning the art. Turning on the descreening filter in the scanning software before you do any editing solves this problem.

In fact, the higher the resolution I set, the worse the problem becomes unless you check the descreening filter.
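A crude software analogue of Despeckle is a median filter: replace each pixel with the median of its 3x3 neighborhood so isolated halftone dots get absorbed into the surrounding tone. A minimal pure-Python sketch, just to illustrate the idea (real tools like Photoshop's Despeckle and scanner descreening are more sophisticated):

```python
from statistics import median

def despeckle(img):
    """3x3 median filter on a 2D list of grayscale values; edges are copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out

# a flat gray field with one bright speckle in the middle
img = [[100] * 5 for _ in range(5)]
img[2][2] = 255
clean = despeckle(img)
# the speckle's window is eight 100s and one 255, so the median is 100
```

This only fixes isolated dots; a regular moiré pattern from scanning four-color process art without descreening needs a proper descreen pass at scan time, as the post says.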

Another setting you can play with is the Unsharp Mask, but I don't fool with it.

Once I have the photo from my own scan, or Buy.com, Amazon.com, or some other download (Usenet, P2P) I almost always tweak it in some way in Photoshop Elements before printing.

To save ink I will invert the image if it is white letters on a black background (back of the jewel case). Sometimes I bump up the contrast or lighten the image. Often I lighten shadows. I'm printing on high-quality plain paper (not coated) with a four-color printer using pigment-based ink, and lightening the image makes it brighter and probably saves a little ink too.

Why? Any reason for this?

Even if I did not edit it in any other way, none of the photos from Buy.com or Amazon.com have print dimensions equal to a CD jewel case.

E.g., Buy.com & Walmart images are, I think, 6.944" x 6.944" at 72 dots per inch.

The Photoshop Print Dialog has a way to print the image a different size but I just resize the image.

I like a snug fit, and in inches the jewel case front is 4.75" high. I could resize to 4.75 x 4.75, but the jewel case can accommodate 4.8" wide. If I wanted to resample I could do a 4.8" x 4.75" in one operation, but that changes the proportions a little, and resampling is like stretching a rubber band to make it thinner.

Just reducing to 4.8 x 4.8 is going to lessen quality some, and maybe I'm stupid, but I think keeping the aspect ratio the same (no resampling), reducing to 4.8 x 4.8, and then trimming .05 off either the top or the bottom, or .025 off both (depending on what the art is and what you are lopping off), is going to yield better quality.

Anyhow, at 4.8" the image is now 104.167 dots per inch.

If I don't like the look of the Buy.com image, I will check Walmart, and finally Amazon.com.

A handful of Amazon .jpgs are the same size as Buy.com's & Walmart's, 6.944" x 6.944" at 72 dots per inch; they may be moving in that direction. Usually in the past they were 72 dpi but at a dimension smaller than 4.8 x 4.8, so when you increase the size of the image you dilute the dpi, and instead of ending up with 104 dpi I think you end up with about 52 dpi. Sometimes the image is 6.944 with a big white border you have to get rid of.
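The dpi bookkeeping here is just pixel count divided by print size, since resizing without resampling keeps the pixels fixed. A quick sketch (the 6.944" / 72 dpi figure and the 4.8" target are from the post; the ~250 px Amazon size is an assumed example to reproduce the ~52 dpi result):

```python
def effective_dpi(pixels, print_inches):
    # resizing without resampling keeps the pixel count; only the dpi changes
    return pixels / print_inches

# Buy.com / Walmart: 6.944" at 72 dpi -> ~500 px per side
src_px = round(6.944 * 72)            # 500
buy_dpi = effective_dpi(src_px, 4.8)  # ~104.2 dpi when printed at 4.8"

# a smaller Amazon image (assumed ~250 px) stretched to the same 4.8"
amazon_dpi = effective_dpi(250, 4.8)  # ~52.1 dpi

print(round(buy_dpi, 3), round(amazon_dpi, 3))
```

So the same 4.8" print can come out at half the effective resolution purely because the source had fewer pixels, which matches the "dilute the dpi" observation.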

I was surprised to see that downloading the Buy.com picture is the same whether or not you enlarge it in the browser first. This doesn't seem to be the case with Amazon.

My tests show that once you are on the album page (not an artist search page), downloading the relatively big photo with a right-click does not get you the same file as if you first enlarge it by clicking on the image (opening a popup browser window) and then right-clicking to download the picture.
"J.J."

CoverDownloader

Reply #149
Once I have the photo from my own scan, or Buy.com, Amazon.com, or some other download (Usenet, P2P) I almost always tweak it in some way in Photoshop Elements before printing.


Just out of interest, why do you print album art? I thought generally the problem was that people have a hardcopy of the art (in their CD jewel case) and want a digital representation on their computer (so that they can associate it with the ripped tracks.) Downloading art off the internet is of course a lot easier than scanning it yourself.

The only type of person I can think of who would print album art at home is a pirate, but I'm sure you have a legitimate reason.