
Messages - Nyzek

1
Quote from: ochosi on May 28, 2010, 12:00:51
i would suggest to make an action to convert the already-gathered play/skipcount to a rating. this way users wouldn't have to start from zero when starting to use your plugin. also: it would be good to not only make this a first-run option, because this way people could test different differentials etc for rating.

I'm still working on the model for this.  Most likely we'll get an option for unrated songs to be given a rating based on your EpicRating rules calculated with play/skip counts.  I don't know if there is a real scenario where you would want this to modify songs that already have a rating (let me know if you have one!).
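As a rough illustration of what that conversion might do (the plugin itself is written in Perl; the function name, deltas, and base value here are all hypothetical), seeding a rating from existing play/skip counts could look like:

```python
def seed_rating(playcount, skipcount, per_play=2, per_skip=-3, base=50):
    """Sketch: seed an unrated song from its existing play/skip counts,
    as if each past play/skip had gone through the rating rules.
    The base value and deltas are made-up defaults; clamped to 0-100."""
    rating = base + playcount * per_play + skipcount * per_skip
    return max(0, min(100, rating))
```

A song played 10 times and never skipped would start well above the base; a heavily skipped song would bottom out at 0 rather than going negative.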

Dan
2
Quote from: Nyzek on March 22, 2010, 22:11:02
Now for the fun part  ;)  Most likely the normalization will happen after a song finishes playing or is skipped, after rating modifications are provided.  At the moment I'm trying to get some data from a smaller library to decide on how to model the normalization.  My goal is that eventually the ratings modified by normalization and play/skip etc can be a good indicator of what songs are currently most appreciated.  I'm also looking at tracking related information, like highest rating achieved, rating decay rate, and probably a few others for individual songs, and attention spread over [period] for libraries.  I'd be using these for various custom Weighted Random systems.

Ok, so I've been working on my datasets, and I'm quite happy with the data model.  The current theory is that after EpicRating changes a rating (it should not happen after a manual rating change), calculations will be run to adjust ALL ratings.  This will very likely be done with some fairly simple statistics (mean, standard deviation, z-score, percentile re-conversion).
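For illustration, that normalization step could be sketched like this (Python for readability; the actual plugin is Perl, and the function name is made up). Each rating is converted to a z-score against the library mean, then mapped through the normal CDF back to a 0-100 percentile:

```python
import math

def normalize_ratings(ratings):
    """Illustrative sketch: rescale ratings via z-score -> normal percentile.

    ratings: dict of song_id -> rating (0-100). Returns a new dict.
    """
    values = list(ratings.values())
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = math.sqrt(var)
    if std == 0:  # all ratings identical; nothing to spread
        return dict(ratings)
    out = {}
    for song_id, v in ratings.items():
        z = (v - mean) / std
        # the standard normal CDF gives the percentile (0..1)
        pct = 0.5 * (1 + math.erf(z / math.sqrt(2)))
        out[song_id] = round(pct * 100, 2)
    return out
```

A song at the library mean always lands back on 50; songs above and below are pushed out symmetrically toward the tails.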

The issue is that songs holding the same rating have nothing to distinguish them from each other, so the results aren't as spread out as they ideally would be for a sustainable rating system.  The conclusion I've reached is that statistical diversity needs to be introduced, in such a way that the differences stay meaningful without being large. (call it a RatingScore?)

At the moment that would likely mean using a fairly arbitrary measure (rule 2 below) to resolve rating-space conflicts.  My hope is that once GMB starts keeping a log of activities, the rating normalization can pull information from the logs to resolve the issue.

Example:
Let's say 5 songs have a rating of 75.  In order to place them properly, ideally we want each of them to have a different number.  If GMB does not have a log of events, then any statistical spread would need to be introduced through the most-recently-played time (say, adding some small decimal number based on how recently each song was played).  This will not resolve ALL conflicts, but should resolve most of them.
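A minimal sketch of that no-log fallback (Python for illustration; the data layout and function name are assumptions, and the 0.09/0.001 values mirror the step sizes proposed below):

```python
def spread_by_recency(songs):
    """Sketch of the no-log tie-break: songs sharing a rating get a tiny
    bonus based on how recently each was played.

    songs: list of (song_id, rating, last_played_epoch_seconds).
    Returns song_id -> rating score with a small decimal added.
    """
    scores = {}
    by_rating = {}
    # group songs by their shared rating
    for song_id, rating, last_played in songs:
        by_rating.setdefault(rating, []).append((song_id, last_played))
    for rating, group in by_rating.items():
        # the most recently played song gets the largest (still tiny) bonus
        group.sort(key=lambda s: s[1], reverse=True)
        for i, (song_id, _) in enumerate(group):
            scores[song_id] = rating + 0.09 - i * 0.001
    return scores
```

Two songs that were never played at the same instant will always come out with distinct scores; songs with identical timestamps remain the unresolved minority.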

The most meaningful resolution would be if GMB does keep a log.  Current theoretical model:

1) If multiple songs are rated "75" -> check to see if they have a "trend" (a previous rating change).  If they have a positive trend history (the song used to be rated lower and is now rated higher), add 0.1 to the Rating Score.  If they have a negative trend history, subtract 0.1 from it.

In plotting how to change a rating, previous rating changes are the most relevant modifier available.

2) If there are still multiple songs with the same rating score (which means they share the same trend, or lack of trend, with the other "conflicts"), decide the matter by adding 0.09 to the most recently played, 0.089 to the next most recent, and so on, until the conflicts are resolved.

3) If there are STILL songs with conflicts, it means they have no trend and have not been played (i.e., the rating was set manually and the song was never played/skipped).  In that case, add 0.005 to the song that has been modified (in any way) most recently, 0.0049 to the next, etc.

4) If any conflict remains, it means the songs in question were set manually as a group (modified at the same time) and have never been played/skipped.  In this case, add 0.0001, 0.0002, etc., based on some arbitrary stable value (a hash of the song?  the file name?)

What this will do is create a set of rating scores that are not shared by any two songs, allowing us to take full advantage of the z-score normalization.  The small (decimal) differences will have a minor impact, and the longer this system is used, the less the later rules will be needed to resolve conflicts.
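Putting the four rules together, the whole cascade could be sketched like this (Python for illustration only; the field names and helper are hypothetical, not the plugin's actual code):

```python
def resolve_rating_scores(songs):
    """Sketch of the four tie-breaking rules (field names hypothetical).

    Each song is a dict with: 'id', 'rating', 'prev_rating' (or None),
    'last_played' (epoch seconds or None), 'last_modified' (epoch seconds).
    Returns id -> rating score, intended to be unique per song.
    """
    scores = {}
    for s in songs:
        score = float(s['rating'])
        # Rule 1: trend from the previous rating change
        if s['prev_rating'] is not None:
            if s['prev_rating'] < s['rating']:
                score += 0.1   # positive trend
            elif s['prev_rating'] > s['rating']:
                score -= 0.1   # negative trend
        scores[s['id']] = score

    # Rule 2: break remaining ties by play recency (0.09, 0.089, ...)
    _break_ties(songs, scores, key=lambda s: s['last_played'],
                base=0.09, step=0.001)
    # Rule 3: then by modification recency (0.005, 0.0049, ...)
    _break_ties(songs, scores, key=lambda s: s['last_modified'],
                base=0.005, step=0.0001)
    # Rule 4: finally by an arbitrary stable value (the id stands in
    # here for a hash or file name), adding 0.0001, 0.0002, ...
    _break_ties(songs, scores, key=lambda s: s['id'],
                base=0.0001, step=-0.0001)
    return scores

def _break_ties(songs, scores, key, base, step):
    """Add decreasing bonuses within each group of equal scores."""
    groups = {}
    for s in songs:
        groups.setdefault(scores[s['id']], []).append(s)
    for tied in groups.values():
        if len(tied) < 2:
            continue
        usable = [s for s in tied if key(s) is not None]
        usable.sort(key=key, reverse=True)
        for i, s in enumerate(usable):
            scores[s['id']] += base - i * step
```

Each rule's bonus is an order of magnitude smaller than the previous one, so a later rule can never reorder songs that an earlier rule already separated.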

Hopefully we'll be getting the non-log version of this up and running soon, and get in some actual tests to make sure that it follows the data models.  This has been tested on a library of 10k songs in the following play configurations:  random, weighted random, filtered to play specific range of rating, filtered to groups of artists, filtered to specific album(s)/artist(s), and a few others.  I'm keeping my fingers crossed.  Hopefully some good news soon!

Dan
3
Thanks!
gmb is, by leaps and bounds, the best music player I've ever found, and I'm slowly picking up converts :)

Quote from: Quentin Sculo on March 21, 2010, 22:45:21
- check if the rating falls below 0

- default
Songs::Get($ID, 'rating'); returns "" for default, which is different than 0 which really means 0
(I know it makes things a bit more complicated, but I like being able to set a song rating to 0)
when doing if(!$song_rating) it will be true in both cases, so you need to do if($song_rating eq "") instead.



I fully agree about the use of a rating of 0  :)  I figure we'll put a system in place that checks whether a rating change would push the value below 0 and, if so, just sets it to 0; same for the high end.
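That clamping is simple to sketch (illustrative only; the function name is made up):

```python
def apply_delta(rating, delta, lo=0, hi=100):
    """Apply a rating change, clamped to gmb's 0-100 range (sketch)."""
    return max(lo, min(hi, rating + delta))
```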

Quote from: Quentin Sculo on March 21, 2010, 22:45:21
- making the field "rating" an option would be nice in the future when users will be able to add custom fields of type "rating", of course $::Options{"DefaultRating"} will have to change then (probably to $::Options{fields}{rating}{default}) so it can wait.

I expected as much, and love the custom rating field idea :D

Quote from: Quentin Sculo on March 21, 2010, 22:45:21

Quote from: Nyzek on March 21, 2010, 04:49:34
-better customization via GUI (perhaps an interface similar to the editing function of Weighted Random)
Do you mean with different saved profiles ?

That's a fine idea, as well!

I'm expecting that we will be putting in options so that users can create custom rating modification rules, likely with an interface similar to the Edit option for the Weighted Random play order.

Quote from: Quentin Sculo on March 21, 2010, 22:45:21
Quote from: Nyzek on March 21, 2010, 04:49:34
-Normalization project (system to auto-adjust all ratings to fit into a specified rating distribution to maintain statistical "meaning" in large libraries.  Intended for use with customized Weighted Random systems.)
When do you want this normalization to occur ? each time a rating is changed, periodically, on demand ?

Now for the fun part  ;)  Most likely the normalization will happen after a song finishes playing or is skipped, after rating modifications are provided.  At the moment I'm trying to get some data from a smaller library to decide on how to model the normalization.  My goal is that eventually the ratings modified by normalization and play/skip etc can be a good indicator of what songs are currently most appreciated.  I'm also looking at tracking related information, like highest rating achieved, rating decay rate, and probably a few others for individual songs, and attention spread over [period] for libraries.  I'd be using these for various custom Weighted Random systems.
4
Orospakr and I have been working on implementing this as a plugin as part of a larger project.

Github page:

http://github.com/orospakr/gmusicbrowser-epicrating

Direct download of plugin:
http://github.com/orospakr/gmusicbrowser-epicrating/raw/epicrating/plugins/epicrating.pm

Currently this project has the following working:

-add/remove X amount of rating on a fully-listened song
-add/remove X amount of rating on a skipped song
-set an X-second "grace" period on the rating change for skipping a song
-add/remove X amount of rating on songs skipped within the grace period

-checkbox option for allowing the plugin to change the rating away from "default" when a song is heard
-checkbox option for allowing the plugin to change the rating away from "default" when a song is skipped*

*Notes:  If the checkbox is enabled and a song set to "default" is skipped, the default value is put in place and no further rating changes are done.  If the checkbox is not enabled, the rating will stay at "default", and changes will only be made to songs that already have a rating.
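A sketch of how those rules might combine (Python for illustration; the delta values and grace period are made-up defaults, where the real plugin makes each configurable):

```python
def rating_delta(event, position_seconds, grace_seconds=10,
                 finished_delta=2, skip_delta=-3, grace_skip_delta=-1):
    """Sketch of the per-event rating rules (all values hypothetical).

    event: 'finished' or 'skipped'.
    position_seconds: how far into the song the skip happened.
    """
    if event == 'finished':
        return finished_delta
    if position_seconds <= grace_seconds:
        return grace_skip_delta  # skipped within the grace period
    return skip_delta
```

The grace period lets an immediate "not right now" skip count for less than a skip partway through the song.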

ToDo:

-better customization via GUI (perhaps an interface similar to the editing function of Weighted Random)
-Normalization project (system to auto-adjust all ratings to fit into a specified rating distribution to maintain statistical "meaning" in large libraries.  Intended for use with customized Weighted Random systems.)

We look forward to all feedback.
5
Suggestions / Re: Save rating into a file.
March 14, 2010, 06:31:16
Quote from: Quentin Sculo on March 07, 2010, 16:17:53
thanks, though I was mostly interested in tags saving a "rating", it's an interesting list.
It seems that none of them read or write a rating tag, or at least not without a custom configuration or a plugin.

I doubt I'll be using the rating tag from the id3v2 standard, because
- it's only for mp3 anyway
- 1-255 doesn't fit gmb's system: 0 is explicitly not allowed there, and I need it for gmb; 0 is not "default", as I like the possibility to set a rating of 0 for a song.
- it doesn't seem to be used by most players anyway even by those saving a rating.

I think I'll start by using a gmb-specific tag (gmb_rating), though if anyone knows of an app that saves ratings in a tag in a format similar to gmb's (0-100), maybe I could use that tag instead of creating my own.
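As an aside on the scale mismatch mentioned above, a conversion between the id3v2 POPM scale (1-255, with 0 meaning "unknown") and gmb's 0-100 scale could be sketched as follows (hypothetical helpers, not part of gmb):

```python
def popm_to_gmb(popm):
    """Map an id3v2 POPM rating (1-255) to gmb's 0-100 scale (sketch).
    POPM 0 means 'unknown', mapped here to gmb's "" (default)."""
    if popm == 0:
        return ""
    return round((popm - 1) * 100 / 254)

def gmb_to_popm(rating):
    """Inverse mapping; gmb's "" (default) becomes POPM 0 (unknown)."""
    if rating == "":
        return 0
    return 1 + round(rating * 254 / 100)
```

Note this still cannot represent gmb's distinction between a real rating of 0 and "default" on the POPM side, which is exactly the objection in the quote.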

I've actually been wishing for a system that allows me to put ratings on any arbitrary tag on a song.  I adore gmb's flexibility, and particularly the ability to set such a wide range of ratings.  I think a system to put ratings on individual tags would be a massive improvement for lending context, as a song you really like for biking might not get the same rating as a song you would want to wake up to.

I've been thinking about how such a system would have to work for me to want to use it, and I would be very interested to hear how other people who want this feature would see it made.