Journal Impact Factors measure the average number of citations that articles in a given journal receive over a specific period of time. Several websites provide journal impact factors; the most established is Journal Citation Reports (JCR) by Clarivate Analytics. Other respected sources include SCImago, Google Scholar, and SCOPUS, each with its own method of measurement. Because JCR and SCOPUS are both costly subscription services, those without access to their data may be inclined to look at other measurements of impact factor. Be forewarned that the world contains not only predatory journals but also suspicious providers of impact factors that may lead you to those journals. A summary list of some of these providers appears in Misleading Metrics (compiled by Beall's List).
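As an illustration, the widely used two-year impact factor reported by JCR divides the citations a journal receives in one year by the number of citable items it published in the previous two years. The figures below are invented purely for demonstration:

```python
# Hypothetical figures for a journal's 2023 two-year impact factor.
# (Invented numbers, for illustration only.)
citations_2023_to_2021_2022 = 850  # citations in 2023 to items published 2021-2022
citable_items_2021_2022 = 400      # articles and reviews published 2021-2022

impact_factor = citations_2023_to_2021_2022 / citable_items_2021_2022
print(round(impact_factor, 3))  # 2.125
```

A journal with this profile would report a two-year impact factor of about 2.1, meaning its recent articles were cited roughly twice each on average during the census year.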
Journal impact factors have traditionally been accepted by administrators, funders, and fellow researchers as the standard for measuring the quality of a journal, but many studies have challenged their validity. For instance, impact factors are based on the mean (or average) number of citations, which can hide the fact that a few highly cited articles skew the rating.
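The skew problem can be seen with a small hypothetical citation distribution: when a couple of articles attract most of the citations, the mean (the basis of the impact factor) far exceeds what a typical article in the journal receives.

```python
from statistics import mean, median

# Hypothetical citation counts for 10 articles in one journal:
# most are cited a handful of times, two are cited heavily.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 60, 120]

print(mean(citations))    # 19.6 -> the impact-factor-style average
print(median(citations))  # 2.5  -> what a typical article receives
```

Here the mean is roughly eight times the median, so the journal-level average says little about the citation performance of any individual article.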
There are also several forms of scholarship to which journal impact factors don't apply, such as monographs, blogs, conference presentations, and white papers. Methods for measuring the impact of these kinds of outputs are referred to as Alt-Metrics, a relatively recent development in scholarly communication. The major sites curating this data include ImpactStory and CitedIn. Citations in PLoS and SCOPUS also provide links to alt-metric data.
To learn more about both traditional metrics and alt-metrics, the Scholarly Research Impact Metrics Subject Guide is highly recommended.