Automatic analysis of product reviews requires deep understanding of natural language text by machines. A limitation of the bag-of-words (BoW) model is that much of the word-relation information in the original sentence is lost and word order is ignored. Higher-order n-grams also fail to capture long-range dependency relations and word order. To address these issues, syntactic features extracted from dependency relations can be used for machine-learning-based document-level sentiment classification. Generalization of syntactic dependency features and negation handling are used to achieve more accurate classification. Further, to reduce the high dimensionality of the feature space, feature selection methods based on information gain (IG) and weighted frequency and odds (WFO) are applied. A supervised feature weighting scheme, delta term frequency-inverse document frequency (delta TF-IDF), is also employed to boost the importance of discriminative features by exploiting the observed uneven distribution of features between the two classes. Experimental results show the effectiveness of generalized syntactic dependency features over standard features for sentiment classification using a Boolean multinomial naive Bayes (BMNB) classifier.
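The delta TF-IDF idea above can be sketched as a small function. This is a minimal illustration, not the paper's exact implementation: it assumes the common formulation in which a term's weight is its in-document frequency times the difference of its IDF values in the positive and negative training sets, with add-one smoothing (an assumption here) to guard against terms absent from one class.

```python
import math

def delta_tfidf(tf, df_pos, df_neg, n_pos, n_neg):
    """Delta TF-IDF weight of one term in one document (illustrative sketch).

    tf     -- term frequency in the document
    df_pos -- number of positive training documents containing the term
    df_neg -- number of negative training documents containing the term
    n_pos  -- total positive training documents
    n_neg  -- total negative training documents
    """
    # Add-one smoothing (assumed here) keeps the logs finite when a
    # term never occurs in one of the two classes.
    idf_pos = math.log2(n_pos / (df_pos + 1))
    idf_neg = math.log2(n_neg / (df_neg + 1))
    # Terms spread evenly across both classes get a weight near zero;
    # class-skewed (i.e. discriminative) terms get a large magnitude,
    # with the sign indicating which class the term favors.
    return tf * (idf_pos - idf_neg)
```

A term occurring in 5 of 100 positive and 5 of 100 negative documents is non-discriminative and scores exactly zero, while a term confined to one class receives a weight whose magnitude grows with its skew, which is precisely the boosting effect described above.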