General Tech / Learning Aids/Tools, posted on 16 Aug 2022.
I have a large, very sparse matrix with 1000 columns and 15000 rows. It mainly contains zeros; the rest are integer values from 1 to 8.
I'm limited to scikit-learn, and none of the PCA implementations there will process sparse matrices (not even RandomizedPCA). I tried LDA and found n_components=870 to be optimal, but this worsened my predictions on the test set.
I'm using LinearSVC as my learning algorithm since I get the best results with it; it performs better than random forests or XGBoost.
The second problem is that I'm in a multiclass setting with 3 classes to predict: 0, 1, 2.
However, the classes are extremely unbalanced: 0 is the dominant class, and I have only a few 1s and 2s (fewer than 100).
I'm using the class_weight='auto' argument; is that correct?
Any advice on preprocessing and on improving my predictions would be helpful.
The standard statistical prescription is to collect more replicates so as to increase the number of occurrences of 1s and 2s. This is expensive and wasteful, not to mention that, in your case, it's likely not even possible.
Gary King, the Harvard quantitative political scientist, has an article about this (with Langche Zeng): "Logistic Regression in Rare Events Data," Political Analysis 9 (2001): 137-163. Here's the abstract of that article:
"We study rare events data, binary dependent variables with dozens to thousands of times fewer ones (events, such as wars, vetoes, cases of political activism, or epidemiological infections) than zeros ("nonevents"). In many literatures, these variables have proven difficult to explain and predict, a problem that seems to have at least two sources. First, popular statistical procedures, such as logistic regression, can sharply underestimate the probability of rare events. We recommend corrections that outperform existing methods and change the estimates of absolute and relative risks by as much as some estimated effects reported in the literature. Second, commonly used data collection strategies are grossly inefficient for rare events data. The fear of collecting data with too few events has led to data collections with huge numbers of observations but relatively few, and poorly measured, explanatory variables, such as in international conflict data with more than a quarter-million dyads, only a few of which are at war. As it turns out, more efficient sampling designs exist for making valid inferences, such as sampling all variable events (e.g., wars) and a tiny fraction of nonevents (peace). This enables scholars to save as much as 99% of their (nonfixed) data collection costs or to collect much more meaningful explanatory variables. We provide methods that link these two results, enabling both types of corrections to work simultaneously, and software that implements the methods developed."
But there are other academics who recommend using a Poisson model instead of LR, since it is intended for use with rare-event, integer data. For instance, see Fader and Hardie, Probability Models for Customer-Base Analysis, which is marketing-focused but generalizable to your area of application.
The extension to machine learning applications is immediate, IMHO, assuming the issue isn't treated as a non-human-aided problem. Spending some time developing these workarounds should lead to their automation in an ML algorithm.