This paper considers online reputation and polling systems in which individuals make recommendations based on their private observations and the recommendations of friends. This interaction among individuals and the resulting social influence are modeled as social learning on a directed acyclic graph. Data incest (misinformation propagation) occurs when identical actions are unintentionally reused in the formation of the public belief in social learning; the information gathered by each agent is mistakenly treated as independent. This results in overconfidence and bias in estimates of the state. Necessary and sufficient conditions on the structure of the information-exchange graph are given to mitigate data incest, and incest removal algorithms are presented. Experimental results on human subjects illustrate the effect of social influence and data incest on decision-making; they indicate that social learning protocols require careful design to handle and mitigate data incest. The incest removal algorithms are illustrated in an expectation polling system in which participants respond with a summary of their friends' beliefs. Finally, the principle of revealed preferences from microeconomic theory is used to parse Twitter datasets to determine whether social sensors are utility maximizers and, if so, to estimate their utility functions.