Planet Four Talk

Losing Images While Classifying - RATS!

  • jshoe by jshoe

    Rats! Just lost another complex image while classifying it! Two hours' work and still only halfway through, and I don't know what the APF ref # is, so I can't reference it (because we aren't given the APF ref. until after 'Finished' is hit). It was an extremely complex field of at least 150 to 200 little fans, with only about 1/3 actually marked so far, but I was willing to finish it today. Maybe I'm the only sucker willing to classify such a monster, but my recent study of blue streaks (they say CO2 frost) helps me see subtle clues about wind direction that are easy to miss, and that knowledge showed that this field had at least 2 events and 2 or more sheets of ice operating independently in time or space. I was willing to take the time to classify it all because I feel it is important, and I had said as much in several comments over the last week or so.

    But over two hours into the task, I clicked my Magic Mouse while it was a few mm out of place on my zoomed-in screen. This activated the 'Back' command and everything disappeared, apparently unrecoverably, at least to me. So this important field and what it might reveal may never get done so carefully, or by anyone willing to search for the tiny clues about wind shifts that I saw by close inspection. This is the 4th time it has happened, and even though I was being extra careful, I still clicked the mouse just as I was thinking I ought to shift its position ever so slightly to be safe. No one is perfect, so it would be much appreciated if the science/programming team decided to trust us enough to let us get back to such images and finish the work when we make such mistakes.
    Are these partially classified images recoverable, including all the work and fans already marked, or is this effort just lost? I may give in again, but I may never choose to really analyze such a complex image again. A pity, because I have taken a lot of personal time to figure out what I am seeing on screen and to apply that knowledge to future images, but I cannot expect myself to ever be more careful than I was this morning.


  • wassock by wassock moderator in response to jshoe's comment.

    Don't know if what you lost is recoverable; we'll need to get Meg or Anya's input on that (and I suspect they'll need to enquire further). But it might be an idea to ask the tech team if it's possible to have a "rest" button which lets you park the image and come back to it later, rather than having to complete it in one sitting.


  • Kitharode by Kitharode moderator

    That's tough luck jshoe and you have my sympathies. I've lost one or two 'hanging' images, but nothing like two hours work. Hope it doesn't drive you away from here. 😉

    Good idea wassock. A 'rest' button would be a great addition.


  • angi60 by angi60

    I'm still getting the problem where if there's a huge number of fans or blotches, I'll get three quarters through marking them, then the screen freezes, and there's nothing I can do. SOOOO FRUSTRATING!! It also seems to upset my browser for the rest of the session. I know this has been raised before though. I agree with the idea for a 'rest' button too.


  • mschwamb by mschwamb scientist, translator

    Hi,
    I'm sorry that happened. There's a memory leak in the drawing library that causes the browser to crash after a very large number of fans are drawn. The development team is working on a long-term fix. We are also looking into ways of removing these images (or perhaps cutting them smaller so that they can be marked).

    Unfortunately, if you don't hit the submit button, the classification isn't stored. The reason is that we want your first reaction, unbiased by other people's opinions. Current studies find that classifications from multiple nonexperts actually do worse when they work together. If you keep them independent and then combine their answers, you get a response that equals or exceeds the accuracy of an expert. This is why we don't want you to half-classify an image and come back to it later: you might find the image in a Talk discussion before you're finished.

    We're a small science team, and the Zooniverse development team is small and running many projects (and building new ones), so it will take some time to sort out the memory leak bug, unfortunately. Sorry for the inconvenience and frustration. I would say mark at least a few fans in these images if you can, so we can tell there is something there.

    I'm going to talk to the science team to see if, based on Talk hashtags (#blotchfield and #fanfields), we can get some of these images out of rotation.

    Best,

    ~Meg

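    Meg's point about combining independent answers can be made concrete. Below is a minimal sketch (not Planet Four code; the 70% per-volunteer accuracy is an invented illustration) of how a majority vote over independent classifiers on a yes/no question beats any single classifier:

```python
from math import comb

def majority_accuracy(p: float, n: int) -> float:
    """Probability that a majority vote of n independent classifiers,
    each correct with probability p, answers a binary question
    correctly (n odd, so there are no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# One volunteer who is right 70% of the time:
print(f"{majority_accuracy(0.7, 1):.3f}")  # 0.700
# Nine such volunteers, classifying independently and then combined:
print(f"{majority_accuracy(0.7, 9):.3f}")  # 0.901
```

    The gain depends on the votes being uncorrelated, which is exactly why discussing an image before classifying it undermines the aggregation.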

  • angi60 by angi60 in response to mschwamb's comment.

    Hi Meg, thanks for your response. Yes, I thought you said the issue was being worked on, and I appreciate you're all doing your best with limited time, staff (and budget, no doubt!). It's at its most frustrating when there are huge numbers of tiny fans, which are quite difficult to mark anyway, without the added insult of the image freezing! So cutting those images down to a smaller size might help.

    Thanks a lot.


  • wassock by wassock moderator

    Meg, in my spare time I have been having a play with the "map the sea floor" project. When the time comes to look at the next stage of this project and the tools required, can I suggest that someone looks at what they use there? I've found their tools very easy to use, and they are measuring similar-ish shapes.


  • mschwamb by mschwamb scientist, translator in response to wassock's comment.

    If you're referring to Seafloor Explorer: my understanding is that it's the same tool set underneath, with the drawing library slightly modified, and mostly the same set of developers. The only difference is that the starting point, width, angle, and length of the fans matter to us in Planet Four, so the science team and Zooniverse development team decided they wanted a triangle shape to use.

    ~Meg


  • mschwamb by mschwamb scientist, translator in response to angi60's comment.

    I appreciate the patience. We don't know which of the images have that many blotches and fans right now, so it's hard for us to remove them. We don't even know which images have fans and blotches in them at all; that's something we need your classifications for. One of those things you learn post-launch. We're starting to take a look at the classification database now and taking the first steps toward analyzing the classifications and understanding what we have. I will try to talk to the team about this in a week or so.

    Cheers,

    ~Meg


  • JellyMonster by JellyMonster in response to mschwamb's comment.

    Meg, on the images that have many fans and blotches, could users just mark a few of them (say, a sample of wind directions), or will this be of no use to you?


  • mschwamb by mschwamb scientist, translator in response to JellyMonster's comment.

    Yes, at least we would know something is in there versus it being an empty image. Please do try to mark as many as you can without crashing your browser.

    Cheers,
    ~Meg


  • wassock by wassock moderator in response to mschwamb's comment.

    I was mainly looking at the scallop tool, which takes 2 measurements, length and widest point, which is effectively what the fan tool measures. It just feels easier doing it as 2 separate measurements, and it covers the instance where part of the thing is off screen (it asks if > half of it is in view). The fan tool makes the wider end downwind of the vent by default, so I'm thinking you'd get the same measurements, but a different tool may mean future data wouldn't be directly comparable with this lot.


  • wassock by wassock moderator in response to mschwamb's comment.

    Meg, from a few posts back:

    "Unfortunately, if you don't hit the submit button, the classification isn't stored. The reason is that we want your first reaction, unbiased by other people's opinions. Current studies find that classifications from multiple nonexperts actually do worse when they work together. If you keep them independent and then combine their answers, you get a response that equals or exceeds the accuracy of an expert. This is why we don't want you to half-classify an image and come back to it later: you might find the image in a Talk discussion before you're finished."

    2 things. First, this implies that us all having "how would you classify this?" type discussions is detrimental to the overall data, or at least will modify it over time. Second, have you considered introducing a set of "repeat images" which get slipped into the mix for "experienced" (whatever that means) markers? That way you would have a control set you could use to gauge how performance changes with time (if at all). My suspicion is that whilst the mean values may stay the same-ish, the standard deviation of the measurements will change. Could be they get tighter as we all get better at it, or conversely they could widen as we start to take less care than we did when new to it. I suspect that a tighter SD would be less likely to give the "right" answer (or the same one we got first time around, at least).

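    The control-set idea wassock describes is easy to prototype: with repeat images you could track whether the mean measurement drifts and whether the spread tightens or widens over time. A minimal sketch (all fan-length values in pixels are invented):

```python
from statistics import mean, stdev

# Hypothetical repeated measurements of the same control fan's length
# (pixels) by the same pool of markers, early vs. later in the project.
early = [41.0, 44.5, 39.8, 46.2, 42.1]
later = [42.0, 42.8, 41.5, 43.1, 42.4]

for label, sample in (("early", early), ("later", later)):
    print(f"{label}: mean={mean(sample):.1f}  sd={stdev(sample):.2f}")
```

    In this made-up example the mean stays same-ish (42.7 vs. 42.4) while the standard deviation tightens (2.61 vs. 0.63), which is exactly the kind of signal a control set would expose.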

  • mschwamb by mschwamb scientist, translator in response to wassock's comment.

    To answer your first question: yes. Studies have shown that when people working on citizen science projects actively communicate, accuracy goes down. The better responses came when each person gave their first response, unbiased. It also makes a more uniform sample, because if you read the Talk discussion for an image while classifying it and someone else doesn't, that's another variable that then has to be quantified. It's mainly having the "how would you classify this?" conversation while classifying that appears to be worse than independent responses. The Zooniverse and other groups are looking at ways to make user interaction useful and still give good or better classification responses.

    It's the same thing with user weighting schemes: telling people how well they did compared to the average vote, for example, caused those who do well to perform worse, and those who disagree to agree more often. So that's one of the reasons we don't share that information on individual volunteers; it changes their behavior (and not always for the better).

    As to your second question: I'm not sure repeat images are useful, but there are ways of quantifying this, and some analysis weighting schemes currently being developed for citizen science datasets do take into account that volunteers change and learn. Since a large fraction of new volunteers come daily, do a few classifications, and leave, I think a lot of that washes out in terms of evolution on the broad scale, but I do think looking at individual users and their responses over time would improve the statistics and the results when combining the classifications for a given cutout.

    ~Meg

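    The weighting idea Meg mentions can work without any feedback to the volunteer: the pipeline keeps a private skill estimate per marker and uses it only when combining votes. A toy example (labels, votes, and weights all invented):

```python
# Toy weighted vote for one cutout: each volunteer's classification
# ("fan" or "blotch") counts in proportion to a privately estimated
# skill score, which an analysis pipeline could update as volunteers
# learn -- without ever showing it to them.
votes = [("fan", 0.9), ("blotch", 0.4), ("fan", 0.7), ("blotch", 0.5)]

totals: dict[str, float] = {}
for label, weight in votes:
    totals[label] = totals.get(label, 0.0) + weight

winner = max(totals, key=totals.get)
print(winner)  # fan
```

    A plain majority here would be a 2-2 tie; the skill weights break it in favour of the two higher-scoring markers.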

  • p.titchin by p.titchin

    I also just had the annoyance of having a busy image go unresponsive. About three-quarters done, so I sat it out and kept stopping my PC from closing the page. Frustrating, but after about 15 minutes I was able to get the page to accept the 'finished' command, so hopefully what I had done was saved.


  • angi60 by angi60 in response to p.titchin's comment.

    Not sure if this will help, but I was having the same problem, using Internet Explorer as a browser. I've now changed to Google Chrome, and haven't had any problems since. 😃


  • mschwamb by mschwamb scientist, translator in response to p.titchin's comment.

    Hi p.titchin and angi60,
    That's the memory leak I was referring to above. Chrome uses a little less memory on your computer than Firefox or IE9. There's a long-term fix in the works to replace the drawing library. Sorry for the frustration and inconvenience.

    ~Meg


  • angi60 by angi60 in response to mschwamb's comment.

    Thanks Meg 😃 No problem. It takes more than that to get rid of me!


  • p.titchin by p.titchin in response to mschwamb's comment.

    Thanks Meg, it's just a matter of picking the moment to finish and 'save' before the crash! I'm getting better at spotting the warning signs!
    Pete
