# Interesting submissions with scores?

Jeff Moser Kaggle Admin Posts 356 Thanks 178 Joined 21 Aug '10

Allan Engelhardt wrote:
> Hmm, Jeff Moser edited that after I posted so it now makes no sense at all. Let me try again and see if Jeff can keep his fingers off the edit button this time....

Sorry about that; your post showed me that I need to tweak how inline MathJax is rendered. That's what I was experimenting with. Currently only display-mode math works (i.e. math surrounded by double dollar signs on each side). Display mode puts equations on a separate line, which looks odd. I'd like to get inline math working as well (i.e. with single-dollar-sign delimiters). The problem is that some programming languages use dollar signs, which confuses MathJax. Just wanted to give an update on what I was doing.

#16 / Posted 2 years ago
Eric Jackson Posts 21 Thanks 9 Joined 9 Sep '10

Uri was absolutely right about my error. I now get 0.189941 like Allan. More significantly for me, Uri's correction made me realize that I had been mistakenly producing predictions in the log domain rather than the real domain. In other words, I had been submitting files with predictions for log(D+1) rather than predictions for D. It certainly helped to fix that problem, although not as much as you might have thought: my leaderboard score improved by only 0.001.

#17 / Posted 2 years ago
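The small score change described above can be checked against the competition metric. A minimal sketch, assuming the HHP metric is RMSLE, sqrt(mean((log(p+1) − log(a+1))²)), and using made-up DaysInHospital values rather than the real data:

```python
import numpy as np

def rmsle(pred, actual):
    """Root mean squared logarithmic error:
    sqrt(mean((log(p + 1) - log(a + 1))^2))."""
    return np.sqrt(np.mean((np.log1p(pred) - np.log1p(actual)) ** 2))

# Hypothetical DaysInHospital values -- mostly zeros, as in the real data.
actual = np.array([0, 0, 1, 2, 0, 3], dtype=float)
pred = np.full_like(actual, 0.2)      # a small constant prediction

# Submitting log(D + 1) instead of D barely moves the score, because
# log1p is close to the identity for small arguments.
print(rmsle(pred, actual))            # real-domain submission
print(rmsle(np.log1p(pred), actual))  # log-domain submission: nearly the same
```

With small predicted values the two submissions land within a few thousandths of each other, consistent with the 0.001 improvement reported above.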
Allan Engelhardt Posts 77 Thanks 29 Joined 28 May '10

@Jeff Moser: No problem, thanks. And don't worry about \mean; I have

    #+LATEX_HEADER: \usepackage[fleqn]{amsmath}
    #+LATEX_HEADER: \DeclareMathOperator \mean {mean}

in my Org-Mode headers.

#18 / Posted 2 years ago
Jeff Moser Kaggle Admin Posts 356 Thanks 178 Joined 21 Aug '10

More details on making beautiful math posts in these forums can be found at http://www.kaggle.com/forums/t/581/tips-for-beautiful-math-posts

Thanked by inf2207

#19 / Posted 24 months ago
Allan Engelhardt Posts 77 Thanks 29 Joined 28 May '10

Eric Jackson wrote:
> More significantly for me, what Uri's correction made me realize was that I had been mistakenly producing predictions in the log domain rather than the real domain. In other words, I have been submitting files with predictions for log(D+1) rather than predictions for D. It certainly helped me to fix that problem, although not as much as you might have thought - my leaderboard score improved by only 0.001.

For small numbers $\log(1+x) \approx x$, so your score wouldn't change much. For example, $\log(1+0.189941) = 0.1739037$.

#20 / Posted 24 months ago
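The approximation above is the first-order Taylor expansion of log(1+x) around zero, and the quoted figure is easy to verify numerically:

```python
import math

x = 0.189941
# For small x, log(1 + x) ≈ x, so a log-domain submission scores
# nearly the same as a real-domain one.
print(math.log1p(x))      # ≈ 0.1739037, the figure quoted above
print(math.log1p(x) - x)  # the gap is only about -0.016
```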
Posts 8 Thanks 5 Joined 20 Mar '11

So far I have only worked with the Y2 data. With that and a constant zero prediction I get close to 0.522 as the error, quite a surprise when watching the leaderboard! I sampled prediction vectors directly from the Y2 values and got a mean of 0.69. Then I plotted RMSLE versus the mean squared error of simulated predictions and found a correlation (duh). Though the math isn't completely clear to me, I think optimizing the model for MSE rather than RMSLE will yield suboptimal results... I also simulated predictions that are exactly right with some percentage (0..100%) of the cases and sampled from a uniform (0..15) distribution otherwise. Under this model, you need to get 95% of all predictions exactly right to be better than the current leaders.

#21 / Posted 23 months ago
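The mixture simulation described above can be sketched roughly. A minimal version, assuming RMSLE as the metric and a made-up, zero-heavy stand-in for the Y2 DaysInHospital distribution (the 95% threshold quoted above depends on the real data, which this sketch does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(0)

def rmsle(pred, actual):
    """sqrt(mean((log(p + 1) - log(a + 1))^2))"""
    return np.sqrt(np.mean((np.log1p(pred) - np.log1p(actual)) ** 2))

# Hypothetical stand-in for DaysInHospitalY2: mostly zeros, a few stays.
actual = rng.choice([0, 0, 0, 0, 1, 2, 5], size=100_000).astype(float)

def mixture_score(frac_right):
    """Predict exactly right with probability frac_right,
    otherwise draw from Uniform(0, 15)."""
    exact = rng.random(actual.size) < frac_right
    noise = rng.uniform(0, 15, size=actual.size)
    return rmsle(np.where(exact, actual, noise), actual)

for frac in (0.0, 0.5, 0.95):
    print(frac, mixture_score(frac))  # error falls as frac_right rises
```

Plotting `mixture_score` over a grid of fractions and reading off where it crosses the leaderboard value gives the kind of threshold estimate described in the post.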
Allan Engelhardt Posts 77 Thanks 29 Joined 28 May '10

A simple linear model on Sex and AgeAtFirstClaim gives a public score of 0.478118: http://www.heritagehealthprize.com/c/hhp/forums/t/648/cases-missing-sex-and-age-code-in-release-3/4217#post4217

Thanked by Chris Raimondi

#22 / Posted 23 months ago
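Such a baseline can be sketched as an ordinary least-squares fit on dummy-coded categoricals. A minimal sketch with made-up rows — the Sex and AgeAtFirstClaim columns are in the HHP members data, but the toy table below is invented and the 0.478118 score is not reproduced here:

```python
import numpy as np
import pandas as pd

# Hypothetical mini-table standing in for the HHP members/target data.
df = pd.DataFrame({
    "Sex": ["M", "F", "F", "M", "F", "M"],
    "AgeAtFirstClaim": ["0-9", "40-49", "70-79", "40-49", "0-9", "70-79"],
    "DaysInHospital": [0, 1, 4, 0, 0, 2],
})

# Dummy-code the two categoricals and add an intercept column.
X = pd.get_dummies(df[["Sex", "AgeAtFirstClaim"]]).to_numpy(dtype=float)
X = np.column_stack([np.ones(len(X)), X])
# Fit in the log domain, since the metric compares log(D + 1) values.
y = np.log1p(df["DaysInHospital"].to_numpy(dtype=float))

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
pred = np.clip(np.expm1(X @ coef), 0, 15)      # back to days, clipped to [0, 15]
print(pred)
```

`lstsq` handles the rank deficiency introduced by dummy coding plus an intercept by returning the minimum-norm solution, so no columns need to be dropped by hand in this sketch.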