  • Original research article
  • Open access

Primary care physicians’ perceptions of Israel’s national program for quality indicators in community healthcare – 2010 and 2020

Abstract

Background

Monitoring the quality of primary care is essential for improving healthcare services. The National Program for Quality Indicators in Community Healthcare measures various aspects of healthcare quality. A 2010 survey among Israeli primary care physicians (PCPs) found widespread support for the program alongside concerns about its effects on workload and competitiveness. This study assessed the extent to which PCPs’ perceptions had changed between 2010 and 2020.

Methods

Cross-sectional surveys on PCPs’ experience with the quality monitoring effort at their health maintenance organizations were conducted in 2010 and 2020 among representative samples of PCPs. Bivariate analyses examined whether the study variables varied between the two timepoints. Logistic regression models evaluated the extent to which the participants’ characteristics and perceptions contributed to their attitudes toward the program.

Results

The study sample comprised 605 physicians in 2010 and 450 physicians in 2020. Overall, support for the National Program for Quality Indicators was high in both surveys. However, some decrease in support for the use of quality indicators was observed among PCPs between 2010 and 2020. The greatest decreases were in the proportion of respondents who perceived, to a great or very great extent, that measuring the clinical performance of quality indicators is important (88% versus 81%) and in the proportion of respondents who perceived that monitoring contributed to improvement (66% versus 60%). Over half of respondents (58%) perceived to a large or very large extent that the program was associated with increased workload, compared with 63% in 2010. Similar proportions of respondents in 2010 and 2020 felt that the program was also associated, to a large or very large extent, with over-competition (47% and 48%, respectively) and excess managerial pressure (58% and 60%, respectively).

Conclusions

The study indicates that while support for the program in general remains high, it continues to have undesirable side effects. Further use of the program for quality indicators must consider the shortcomings voiced in 2010, which remained uncorrected as reflected in the results of the 2020 survey: extreme managerial pressure, increased workload, and over-competitiveness.

Background

Healthcare quality indicators are used in many countries to assess, and ultimately improve, the quality of healthcare services [1,2,3,4,5]. The National Program for Quality Indicators in Community Healthcare, established in Israel in 2000, collects data on 73 indicators of care provided in the community, most of which are process indicators (https://www.israelhealthindicators.org/). The program relies on the voluntary participation of Israel’s four public health maintenance organizations (HMOs) [6,7,8] and mainly uses the large-scale computerized databases maintained by these HMOs to assess the quality of selected services [9]. The program currently covers nine areas of measurement (health promotion, cancer screening, child and adolescent health, adults over 65, respiratory diseases, cardiovascular health, diabetes, antibiotic treatment, and mental health). The data enable ongoing, continuous, and dynamic monitoring and provide information to policymakers and the public [2, 5,6,7]. Among the four HMOs, at least the two largest, which cover about 80% of the population, have added internal indicators of their own. However, to the best of our knowledge, physicians are not aware which indicators belong to the National Program and which are internal.

Since the inception of the program, studies have indicated improvement in several quality indicators. For example, increased immunization rates among the elderly and improved healthcare of the elderly population [10], a growth in the use of community-based healthcare services [11], and a significant increase in screening for breast cancer and colorectal cancer [12]. Furthermore, longitudinal adherence to quality indicators in diabetes care was found to be associated with reduced risk of cardiac morbidity [13].

The opinions of healthcare providers and their stance regarding quality monitoring programs have a major effect on the success of such programs [6]. Healthcare providers in the United Kingdom have consistently expressed a positive view of quality monitoring programs, both in hospitals and in the community [14, 15]. A similar approach has been voiced by healthcare providers in Israel [16, 17]. In 2010, some of the researchers involved in the present study conducted a survey among PCPs working in the four HMOs regarding their experience with the quality monitoring effort. The results showed that most respondents (87%) felt that quality monitoring by indicators was important, and many of them (72%) supported the program’s continuation [17]. On the other hand, 60% noted that they felt extreme managerial pressure due to the program, and 65% mentioned that they had to cope with increased workloads. Forty percent of the respondents criticized the over-competitiveness generated by the program [17]. In the years following the 2010 study, changes were made to the National Program for Quality Indicators in Community Healthcare. These included the publication of comparisons of indicators among the four HMOs. To alleviate the burden on physicians, the number of national indicators that measure physician performance, which may increase competition among HMOs, was reduced. HMOs continued to collect internal metrics on physician performance [personal discussion with HMO managers]. These changes called for reconsidering the implications of the program and reevaluating the viewpoints of healthcare providers, including the extent of their support for the program. To that end, we conducted a second survey in 2020 to examine the views of PCPs on the program.

Methods

Setting and participants

Cross-sectional surveys were conducted in 2010 and 2020 among representative samples of PCPs working in the four public HMOs. The results of the 2010 survey were previously published [17].

The study population consisted of PCPs working for the HMOs (full- or part-time, salaried or self-employed contractors) engaged in the direct care of adult patients. Physicians with no responsibility for the quality of care of a panel of patients (i.e., consultants, physicians engaged mainly in administrative or managerial work, retired physicians, and temporary replacements) were excluded from the study population. The study team estimated that approximately 4400 Israeli physicians met these criteria. For each survey, each HMO provided the contact details of a sample of PCPs randomly selected from its administrative records. After receiving the lists from the HMOs, comprising 1000 PCPs in 2010 and 896 PCPs in 2020, the eligibility criteria were checked again and PCPs who did not fulfill them were excluded. This resulted in 804 eligible PCPs who were approached in 2010 and 725 who were approached in 2020.

The study was approved by Myers-JDC-Brookdale’s institutional ethics committee (approval number IRB-BH-261). All participants provided their consent to participate in the study and were assured anonymity.

Study questionnaire

The development of the study questionnaire for the 2010 survey was previously described [17]. The development process included internal validation, a pilot study, and revision following comments received in the pilot study. The same questionnaire was used in the 2020 survey. In total, the study questionnaire comprised 105 questions, of which 22 were open-ended.

The questionnaire addressed physicians’ experience with the quality monitoring effort at their HMOs. Its main topics included PCPs’ experiences with the program; their perceptions of the quality indicators and their definitions; their assessment of the program’s impact on their work, patient care, and their relationships with their patients, colleagues, and health plans; their difficulties and concerns regarding the program; suggestions for improving the program; use of the information gathered through the program; and satisfaction with the program and desires regarding its future. Information on respondents’ personal and professional characteristics was also collected.

Responses to each study item were provided on a six-point scale ranging from 1 (very little or not at all) to 6 (to a very large extent).

Data collection

The 2010 survey took place between August and December 2010. The 2020 survey took place between December 2019 and February 2020. Sampled PCPs were approached by email (2020) or regular post (2010) and had an opportunity to respond either by telephone, regular post, or fax (in 2010) or by email or telephone (in 2020). Designated respondents received up to four reminders by telephone or email. In 2010, most respondents (85.7%) replied by phone, regular post or fax, whereas 14.3% of respondents replied by email. In 2020, most participants (81.5%) replied by email, and the rest (18.5%) replied by other means (p < 0.001 for the difference between the survey years).

Response rate

Only questionnaires with complete information on demographic and professional characteristics were included in the data analysis. Of the 804 PCPs who met the eligibility criteria and were approached for participation in 2010, 605 (75.2%) provided complete questionnaires. Of the 725 PCPs approached in 2020, 450 (62.1%) did so.

Data analysis

Analyses were conducted using the Statistical Package for the Social Sciences (SPSS), version 24 (IBM, Armonk, NY, USA).

Non-responses to the closed-ended questions were treated as missing values. Variables were compared by chi-square test.
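As an illustration of the chi-square comparisons used here, the following sketch (not the authors' code) tests whether support for the program's continuation differed between survey years; the counts are loosely reconstructed from the reported proportions (about 73% of 605 respondents in 2010 and 65% of 450 in 2020), not taken from the raw data.

```python
# Hypothetical 2x2 comparison: survey year vs. support for continuing the program.
# Counts are reconstructed from the reported percentages, not the authors' data.
from scipy.stats import chi2_contingency

table = [
    [442, 163],  # 2010: ~73% of 605 supported continuation / did not
    [293, 157],  # 2020: ~65% of 450 supported continuation / did not
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # a 2x2 table has one degree of freedom
```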

Bivariate analysis was performed to examine whether the study variables varied across key subgroups of PCPs and between the two survey time points. Logistic regression models were fitted to assess the extent to which the participants’ characteristics and perceptions contributed to their attitudes toward the monitoring program. The results were presented as odds ratios with 95% confidence intervals (CIs) [18].

The data were weighted to reflect the differences among the HMOs in size (i.e., the number of PCPs working at each HMO) and response rate (HMO-specific response rates ranged from 59% to 67%), so that the results would more accurately reflect the national study population. The weighting also accounted for the relationship between the sampling probability and the number of HMOs in which each PCP worked (i.e., a PCP working for two HMOs was more likely to be included in the sample than a PCP working for only one HMO). All statistical analyses used these weights.
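The weighting scheme described above can be illustrated with a small sketch; all counts and HMO labels below are invented for the example and are not the study's actual figures.

```python
# Hypothetical weighting: each respondent's weight reflects the ratio of the
# HMO's PCP workforce to its respondent count, divided by the number of HMOs
# the PCP works for (working for more HMOs raises the sampling probability).
hmo_pcp_counts = {"A": 2200, "B": 1300, "C": 500, "D": 400}  # PCPs per HMO (invented)
hmo_respondents = {"A": 240, "B": 110, "C": 55, "D": 45}     # respondents per HMO (invented)

def respondent_weight(hmo: str, n_hmos_worked: int) -> float:
    """Size/response-rate weight, adjusted for multi-HMO sampling probability."""
    base = hmo_pcp_counts[hmo] / hmo_respondents[hmo]
    return base / n_hmos_worked

# A PCP working for two HMOs gets half the weight of an otherwise
# identical PCP working for one HMO.
print(respondent_weight("A", 1), respondent_weight("A", 2))
```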

As not all respondents answered all questions, some regression models included fewer than 1055 responses. Missing values were not imputed.

A p-value < 0.05 was considered statistically significant.

Results

Demographic and professional characteristics of the study sample

Comparison of respondent characteristics by survey year showed differences in age category, country of birth, medical specialty, type of employment, and main type of practice (Table 1). A larger percentage of the study population in 2020 was over 45 years of age compared with 2010, and a larger percentage was non-Jewish. Additionally, in 2020 the study population comprised a greater proportion of internal medicine/other specialists and a smaller proportion of family physicians and non-board-certified physicians compared with 2010. At each timepoint, similar proportions of respondents were specialists in family medicine and physicians without board certification. Specialists in internal medicine and other fields who work as PCPs comprised about a fifth of the study population. In 2010, about a quarter of the respondents worked as independent physicians; in 2020, over a third of the respondents reported this type of employment.

Table 1 Demographic and professional characteristics of the study population by survey year (%)

PCPs’ perceptions on the National program and the quality indicators collected

Overall, support for the National Program for Quality Indicators was high at both time points surveyed. However, in 2020 these proportions decreased compared with 2010, in part statistically significantly. Most physicians perceived that monitoring clinical performance is important to very important, with a statistically significantly lower proportion agreeing with this statement in 2020 than in 2010 (88% in 2010 versus 81% in 2020, p = 0.03). Two-thirds of physicians (66%) surveyed in 2010, compared with 60% of physicians surveyed in 2020, thought that monitoring contributes to improved quality to a great or very great extent, but this difference was not statistically significant (NS). A higher proportion of physicians surveyed in 2010 than in 2020 expressed support for continuing the program (73% versus 65%, p = 0.03). A higher proportion of physicians surveyed in 2010 than in 2020 perceived that the program increases workload to a great or very great extent (63% versus 58%, NS). Less than half of physicians replied that the program increased their satisfaction with their job to a great or very great extent, with a statistically significantly higher proportion in 2010 than in 2020 (48% versus 37%, p = 4.5×10⁻⁵) (Fig. 1; Table 2). The proportions of physicians who believed to a great or very great extent that the clinical areas were chosen appropriately were similar at both time points (76% and 74% in 2010 and 2020, respectively, NS), as was the proportion of physicians who believed that the indicators were defined appropriately (60% and 59% in 2010 and 2020, respectively, NS).

Fig. 1
figure 1

PCP perceptions on the National Program for Quality Indicators in Community Healthcare and the quality indicators collected: comparison between responses in the 2010 and 2020 surveys. Responses were provided on a scale of 1 (to a very small extent) to 6 (to a very large extent). * The difference between the years is statistically significant. The numbers in white squares indicate the percentage of responders who responded “to a very high” or “high” extent

Table 2 The relationship between the main study variables and physician demographics and professional characteristics by survey year

At the same time, one fifth of the respondents (20%) in 2020 recommended, to a great or very great extent, modifying some of the specific indicators; this proportion was lower than in 2010 (23%) (Fig. 1). Seven percent of respondents mentioned that some unnecessary clinical areas are included in the program, such as asthma, vaccinations, and diabetes, alongside some missing clinical areas such as health promotion, cancer and early detection of cancer, osteoporosis, and the quality of communication with patients. The unnecessary and missing clinical areas most mentioned in the 2020 survey were similar to those mentioned in 2010.

Comparison of the responses provided by family medicine specialists, internal medicine/other specialists and physicians who are not board certified (Table 2), showed that between 2010 and 2020, there was a statistically significant decrease in the proportion of family medicine specialists who perceived that monitoring clinical performance is important, supported the continuation of the program, and reported that the program increased their job satisfaction to a great/very great extent (p < 0.05, p < 0.01 and p < 0.001, respectively). No statistically significant change was observed in these parameters for the other physician subgroups. Notably, in 2020, the proportion of family physicians who reported that the program increased their job satisfaction to a great/very great extent was very low (15%) compared to the other physicians.

Perceived challenges associated with the program

Approximately half or more of the respondents perceived that the program was associated with increased workload (63% in 2010 and 58% in 2020), over-competition (47% and 48%, respectively), and excess managerial pressure (58% and 60%, respectively), but the differences between the years were not statistically significant (Fig. 2).

Fig. 2
figure 2

Challenges related to the National Program for Quality Indicators in Community Healthcare. The bars depict the percentage of PCPs who rated the challenge as “high” or “very high”, by survey year (%). *Note: The differences are not statistically significant

The most common changes suggested by the respondents for handling these challenges included emphasizing outcome rather than process indicators; reducing the number of indicators; providing advanced training; using automatic measuring tools; ending the practice of sharing comparisons between HMOs with the public; ending the practice of sharing comparisons among physicians within HMOs; reducing organizational pressures; and allocating specific time slots for quality measurement.

In both surveys, more than half of the respondents perceived to a great/very great extent that the HMOs did their best to help them improve their performance on the quality indicators (51% in 2010 and 54% in 2020, NS). When asked to identify the main changes they would make in the way HMOs learn from the indicators, the most common responses were providing more positions for healthcare personnel (physicians, nurses, secretaries, health promoters, etc.), expanding the visit time allocated to each patient, and increasing the availability of laboratory and other tests (such as magnetic resonance imaging, computed tomography, mammography, etc.) and of specialists. In addition, respondents suggested expanding the dedicated time allocated to dealing with quality indicators and raising physicians’ salaries.

Physicians’ job satisfaction

Most of the physicians who participated in the 2020 survey indicated that they were generally satisfied or very satisfied with their work (84%, compared with 80% in 2010); 13% were moderately satisfied, and 3% were unsatisfied or very unsatisfied. In 2020, more than a third of respondents (37%), compared with 48% in 2010, reported increased job satisfaction since the monitoring program was implemented (p = 4.5×10⁻⁵). Furthermore, about two-thirds of respondents (66% in 2010 and 63% in 2020) felt satisfied or very satisfied with their performance as measured by the quality indicators, compared with the other physicians in their districts; the difference between the survey years was not statistically significant.

Correlates of physicians’ perceptions

Bivariate analysis was performed to examine whether the study variables varied across key subgroups of respondents and across study years. As shown in Table 2, PCPs who were over 60 years of age, male, Jewish, or not board-certified supported the continuation of the program more than PCPs who were younger, female, non-Jewish, or board-certified specialists. Nonetheless, in almost all of these subgroups, support for the program in its current setup decreased between 2010 and 2020.

In almost all strata, respondents to the 2020 survey reported less burden at work than the 2010 respondents. This decrease was most evident among PCPs who were born abroad, independent physicians, and those who held both salaried and independent positions. In contrast, the sense of workload increased among PCPs who worked only as salaried physicians and among specialists.

Despite a slight decrease between 2010 and 2020, most respondents in all strata supported the continuation of the indicators program.

Covariates of PCP perceptions

Table 3 presents the results of logistic regressions, which assessed the independent effects of a variety of personal and professional characteristics on PCP attitudes toward the monitoring program. This subgroup analysis showed that non-Jewish PCPs versus Jewish PCPs, females versus males, and PCPs without a board certification compared to board certified PCPs were more likely to perceive that the program improves quality and that it is important. Other demographic and personal characteristics had a mixed effect on PCPs’ attitudes towards the program.

Table 3 Logistic regressions of selected outcome variable related to the National program for quality indicators in community healthcare on PCPs’ personal and professional characteristics*

Discussion

Two main findings emerge from the 2020 survey conducted among PCPs to examine their perceptions of the Israel National Program for Quality Indicators in Community Healthcare. First, most respondents think that the program is important and contributes to the quality of medical care, and they support its continuation. However, although a high percentage of PCPs supported the program in 2020, the support level was slightly lower than it was in 2010. Second, it seems that the steps taken to mitigate the side effects of the program (e.g., increased workload, excessive managerial pressure, and over-competition) had little effect or were offset by other developments. Despite changes made to the program, PCPs’ perceptions of its adverse effects did not change substantially, except for a small decrease in the proportion of PCPs who felt that the quality monitoring program increased their workload to a great extent.

It is possible that the actions of HMO managements were not felt by PCPs. In addition, several changes occurred in the decade between the two surveys that may have offset internal efforts to make the measurement program less onerous. One such change was the publication of inter-HMO comparative data starting in 2012. That change alone may have increased the workload and pressure perceived by PCPs. However, our study has shown that the proportion of PCPs reporting stress and competition did not increase between 2010 and 2020. This observation may be attributed to the HMOs’ actions to reduce the program’s significant challenges which may have offset PCPs’ perceived pressure due to the publicizing of indicators.

In contrast to physicians without board certification, who perceived greater benefit from and support for the quality indicators program, family medicine specialists showed the greatest objection to the program. These professionals have trained and passed board examinations in this specific field of primary care medicine and are the only ones who train students and residents in the clinics. Therefore, they may perceive the quality indicators program as a threat to their autonomy. We observed similar trends of reduced support for the program, reduced work satisfaction, and a decreased perception that monitoring contributes to improved quality among the internal medicine and other specialists.

A position paper published by the Israeli Medical Association (IMA) in 2018 pointed to several problems, including problems in correctly measuring essential indicators (i.e., not all essential indicators can be measured accurately), methodological difficulties in standardizing patients’ health and socioeconomic status, incentives for patient selection, incentives to provide unnecessary treatments aimed at influencing indicators’ performance rates, and the allocation of resources and management effort to actions that are measured at the expense of those that are not. In this position paper, the IMA recommended maintaining only nine quality indicators [19]. However, it should be noted that this is a position paper that is not supported by empirical data.

Due to the differences among healthcare systems around the world and the type of quality indicators employed in each country [20, 21], it is difficult to compare findings across countries. There have been calls to align quality indicators across organizations and countries [22, 23]. Furthermore, there is a lack of studies on PCPs’ perceptions on measuring quality indicators in health, specifically in primary care.

Whereas in Israel all indicators are retrieved centrally from the computerized electronic medical record, in other countries, doctors report their performance in quality indicators in writing. In a study conducted in the Netherlands among medical specialists, residents and nurses working in intensive care in 8 hospitals, 66% perceived documenting quality indicator data as unnecessary and 18% perceived them as unreasonable. Unnecessary documentation was perceived as reducing the sense of autonomy. Nevertheless, documentation burden had no effect on the perceived joy in work [24]. Documentation requirements for electronic health records, including quality metrics, compliance and billing have also been reported to contribute to stress and burnout among PCPs [25, 26].

As mentioned above, our findings showed that female PCPs (versus male PCPs), non-Jewish PCPs (versus Jewish ones), and those who are not board certified (versus board-certified PCPs) were more likely to support the Program for Quality Indicators in Community Healthcare and its continuation.

Gender differences in clinical decision-making may stem from implicit biases and historical biases in medical education, which can influence how doctors approach quality care [27]. This is supported by findings that female doctors often exhibit more empathizing traits, aligning with programs that prioritize patient-centered or person-centered care [27, 28]. However, the differences between male and female physicians in overall performance on quality measures have been shown to be minimal in contemporary settings where advanced clinical decision support and feedback systems are in place. In such environments, the adoption of quality indicator programs tends to be similar across genders [29].

There is evidence that minority physicians are more supportive of quality improvement programs that directly tackle health disparities, especially those related to preventive screenings and patient safety, where minority patients often experience lower levels of care​ [30]. Minority physicians frequently report that healthcare systems are less responsive to the needs of diverse patient populations, leading to their greater advocacy for quality indicator programs that address these gaps​ [31].

It is possible that the program helps PCPs who are not board certified to follow important guidelines and emphases in treatment, while this expectation may evoke resistance among specialist PCPs due to perceived reduced autonomy. Studies have suggested that the increasing emphasis on quality indicator programs, particularly under value-based care models, has led to concerns among physicians about a loss of clinical decision-making freedom. These programs, often linked to performance metrics and reimbursement systems, are seen by some doctors as reducing their autonomy by imposing standardized care protocols that may not account for individual patient needs [32]. For example, a study that examined physicians’ perceptions in Canada, the United States, and Norway found significant concerns about the freedom to make clinical decisions. Physicians working in the United States reported particularly high levels of perceived autonomy compared with their Canadian and Norwegian counterparts. However, many doctors across these countries felt that such programs limited their ability to spend adequate time with patients and exercise clinical freedom, affecting their job satisfaction and perceptions of quality care [33]. Moreover, some researchers argue that while quality indicators can improve patient outcomes, the way they are implemented often reduces physicians’ sense of professional autonomy, potentially leading to burnout and dissatisfaction [34].

Limitations

Although some of the demographic parameters collected in 2020 differed from those of the 2010 survey (age, country of birth, specialty, type of employment, and main type of practice), the analysis was adjusted for these parameters so that the natural differences in populations between the survey timepoints would not affect the results. The survey was conducted among a random sample of PCPs working in each of the four public HMOs, and the results were weighted for the size of the HMOs; therefore, its findings represent the perceptions of PCPs working in Israel’s public health system. However, a selection bias may be present: because the HMOs provided us only with the contact details of PCPs, not their characteristics, it was not possible to compare the attributes of respondents with those who declined to respond. Nor is it possible to estimate whether respondents who were more supportive of the program, or who opposed it, were more inclined to answer the survey. Additionally, in the 2010 survey there were twice as many family medicine specialists as internal medicine/other specialists (40.6% versus 20.2%), whereas in 2020 their percentages were almost equal (35.5% versus 29.6%). However, analysis of the responses by professional group showed similar trends regarding their opinions about, and support for, the program. As some HMOs added internal indicators, the burden perceived by PCPs working in each HMO may differ. To the best of our knowledge, physicians are not aware which indicators are National Program ones and which are internal.

The two surveys were conducted 10 years apart, and although the questions were identical, the method for completing the questionnaires differed: in the 2010 survey, most questionnaires were completed by regular mail, fax, or phone, whereas in 2020 most were completed online. These different means of survey completion may have led to a social desirability bias among respondents, despite the assurance of anonymity. Last, the study could have been subject to a response bias, which could not be addressed.

Conclusions and policy implications

The study indicates that despite a slight decrease, support for the National Program for Quality Indicators in Community Healthcare remains relatively high among PCPs, and most PCPs recognize its importance in improving the quality of patient care. Nevertheless, similar to the findings in 2010, the program seems to increase PCP workload and reduce work satisfaction.

Therefore, it should be evaluated whether the program adversely impacts PCPs’ workload and whether increased workload due to the program affects patient care. Measuring PCPs’ attitudes towards the program by routine surveys may foster greater engagement and satisfaction and help us to understand which changes should be undertaken.

Ultimately, there is an opportunity for program leaders and HMOs to engage in a broader dialogue with PCPs to refine the program’s design and implementation. The cooperative relationship between HMOs and PCPs in Israel offers a strong foundation for constructive discussions that could improve the program and align it with the needs of clinics and physicians, while also addressing broader national health objectives.

Data availability

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.

Abbreviations

HMO:

Health maintenance organization

NS:

Not statistically significant

PCP:

Primary care physician

OECD:

Organisation for Economic Co-operation and Development

References

  1. Australian Institute of Health and Welfare. Australia’s health 2018. Australia’s health series no. 16. AUS 221. Canberra: AIHW; 2018. https://www.aihw.gov.au/getmedia/7c42913d-295f-4bc9-9c24-4e44eff4a04a/aihw-aus-221.pdf

  2. Crampton P, Perera R, Crengle S, Dowell A, Howden-Chapman P, Kearns R, et al. What makes a good performance indicator? Devising primary care performance indicators for New Zealand. N Z Med J. 2004;117:U820.

  3. Arah OA, Westert GP, Hurst J, Klazinga NS. A conceptual framework for the OECD health care quality indicators project. Int J Qual Health Care. 2006;18(Suppl 1):5–13.

  4. Carinci F, Van Gool K, Mainz J, Veillard J, Pichora EC, Januel JM, et al. Towards actionable international comparisons of health system performance: expert revision of the OECD framework and quality indicators. Int J Qual Health Care. 2015;27:137–46.

  5. Blum N, Halperin D, Masharawi Y. Ambulatory and hospital-based quality improvement methods in Israel. Health Serv Insights. 2014;7:25–30.

  6. Landon BE. Physicians' views of performance reports: grading the graders. Isr J Health Policy Res. 2012;1:27.

  7. Rosen B, Pawlson LG, Nissenholtz R, Benbassat J, Porath A, Chassin MR, et al. What the United States could learn from Israel about improving the quality of health care. Health Aff (Millwood). 2011;30:764–72.

  8. Rosen B, Porath A, Pawlson LG, Chassin MR, Benbassat J. Adherence to standards of care by health maintenance organizations in Israel and the USA. Int J Qual Health Care. 2011;23:15–25.

  9. Rosen B, Porath A, Pawlson LG, Chassin MR, Benbassat J. Systems for monitoring quality of community healthcare in Israeli health plans and US managed-care organizations. Jerusalem: Smokler Center for Health Policy Research, Myers-JDC-Brookdale Institute; 2011. https://brookdale-web.s3.amazonaws.com/uploads/2018/01/576-11-Healthcare-Monitoring-REP-ENG.pdf.

  10. Podell R, Kaufman-Shriqui V, Sagy YW, Manor O, Ben-Yehuda A. The quality of primary care provided to the elderly in Israel. Isr J Health Policy Res. 2018;7:21.

  11. Wilf-Miron R, Bolotin A, Gordon N, Porath A, Peled R. The association between improved quality diabetes indicators, health outcomes and costs: towards constructing a business case for quality of diabetes care–a time series study. BMC Endocr Disord. 2014;14:92.

  12. Weisband YL, Torres L, Paltiel O, Sagy YW, Calderon-Margalit R, Manor O. Socioeconomic disparity trends in cancer screening among women after introduction of national quality indicators. Ann Fam Med. 2021;19:396–404.

  13. Abdel-Rahman N, Calderon-Margalit R, Cohen A, Elran E, Golan Cohen A, Krieger M, et al. Longitudinal adherence to diabetes quality indicators and cardiac disease: a nationwide population-based historical cohort study of patients with pharmacologically treated diabetes. J Am Heart Assoc. 2022;11:e025603.

  14. Levi B, Borow M, Glekin M. Participation of national medical associations in quality improvement activities - international comparison and the Israeli case. Isr J Health Policy Res. 2014;3:14.

  15. Levi B, Zehavi A, Chinitz D. Taking the measure of the profession: physician associations in the measurement age. Health Policy. 2018;122:746–54.

  16. Rosen B, Nissanholtz-Gannot R. From information about quality - to improving quality. Interim report: summary and analysis of interviews with managers in the health funds. Jerusalem: Smokler Center for Health Policy Research, Myers-JDC-Brookdale Institute; 2010. https://brookdale-web.s3.amazonaws.com/uploads/2018/01/562-10-QualityInd-REP-HEB.pdf

  17. Nissanholtz-Gannot R, Rosen B, the Quality Monitoring Study Group. Monitoring quality in Israeli primary care: the primary care physicians' perspective. Isr J Health Policy Res. 2012;1:26.

  18. Norton EC, Dowd BE, Maciejewski ML. Odds ratios—current best practice and use. JAMA. 2018;320(1):84–5.

  19. Biderman A, Tabenkin SV, Lavon H, Levin E, Nof Sadeh A. E. Quality indices for community medicine in Israel: position paper. Israeli Medical Association; 2018. https://www.ima.org.il/userfiles/image/Ne106_madadeyEichut.pdf

  20. Improving healthcare quality in Europe: characteristics, effectiveness and implementation of different strategies. Copenhagen: European Observatory on Health Systems and Policies; 2019. https://www.ncbi.nlm.nih.gov/books/NBK549260/box/Ch3-b2/?report=objectonly

  21. Jamieson Gilmore K, Corazza I, Coletta L, Allin S. The uses of patient reported experience measures in health systems: a systematic narrative review. Health Policy. 2023;128:1–10.

  22. Jacobs DB, Schreiber M, Seshamani M, Tsai D, Fowler E, Fleisher LA. Aligning quality measures across CMS - The universal foundation. N Engl J Med. 2023;388:776–79.

  23. OECD. Recommendations to OECD ministers from the high level reflection group on the future of health statistics: strengthening the international comparison of health system performance through patient-reported indicators. Organisation for Economic Co-operation and Development; 2017. https://www.oecd.org/health/Recommendations-from-high-level-reflection-group-on-the-future-of-health-statistics.pdf

  24. Hesselink G, Verhage R, Hoiting O, Verweij E, Janssen I, Westerhof B, et al. Time spent on documenting quality indicator data and associations between the perceived burden of documenting these data and joy in work among professionals in intensive care units in the Netherlands: a multicentre cross-sectional survey. BMJ Open. 2023;13:e062939.

  25. Budd J. Burnout related to electronic health record use in primary care. J Prim Care Community Health. 2023;14:21501319231166921.

  26. DiGiorgio AM, Ehrenfeld JM, Miller BJ. Improving health care quality measurement to combat clinician burnout. JAMA. 2023;330:1135–36.

  27. Champagne-Langabeer T, Hedges AL. Physician gender as a source of implicit bias affecting clinical decision-making processes: a scoping review. BMC Med Educ. 2021;21:171.

  28. Lim SA, Khorrami A, Wassersug RJ, Agapoff JA. Gender differences among healthcare providers in the promotion of patient-, person- and family-centered care, and its implications for providing quality healthcare. Healthcare. 2023;11:565.

  29. Jackson JL, Farkas A, Scholcoff C. Does provider gender affect the quality of primary care? J Gen Intern Med. 2020;35:2094–98.

  30. Beach MC, Gary TL, Price EG, Robinson K, Gozu A, Palacio A, et al. Improving health care quality for racial/ethnic minorities: a systematic review of the best evidence regarding provider and organization interventions. BMC Public Health. 2006;6:104.

  31. Jindal M, Chaiyachati KH, Fung V, Manson SM, Mortensen K. Eliminating health care inequities through strengthening access to care. Health Serv Res. 2023;58(Suppl 3):300–10.

  32. Waddimba AC, Mohr DC, Beckman HB, Meterko MM. Physicians' perceptions of autonomy support during transition to value-based reimbursement: a multi-center psychometric evaluation of six-item and three-item measures. PLoS ONE. 2020;15:e0230907.

  33. Tyssen R, Palmer KS, Solberg IB, Voltmer E, Frank E. Physicians' perceptions of quality of care, professional autonomy, and job satisfaction in Canada, Norway, and the United States. BMC Health Serv Res. 2013;13:516.

  34. Emanuel EJ, Pearson SD. Physician autonomy and health care reform. JAMA. 2012;307:367–68.

Acknowledgements

The authors would like to thank the participants who agreed to answer the survey, and Tamar Medina Artom for contributing to data collection and analysis.

Funding

This work was supported by a grant from the Israel National Institute for Health Policy Research (ממ– 323–2018).

Author information

Authors and Affiliations

Authors

Contributions

Conceptualization, R.N.G. and B.R.; methodology, R.N.G. and B.R.; validation, R.N.G.; formal analysis, R.N.G. and A.B.; investigation, R.N.G. and A.B.; data curation, A.B.; writing, R.N.G. and A.B.; writing, review and editing, R.N.G. and B.R.; supervision, R.N.G. and B.R. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Rachel Nissanholtz-Gannot.

Ethics declarations

Ethics approval and consent to participate

We confirm that all participants provided written informed consent to participate in the study, and no participants under 18 years of age were included. The study was conducted in accordance with ethical principles and guidelines, and the necessary approval was obtained from an institutional ethics committee prior to the initiation of the research. The research was approved by the Ethics Committee of the Brookdale Institute for studies involving humans.

Consent for publication

Informed consent was obtained from all subjects involved in the study.

Conflict of interest

The authors declare no conflict of interest. B.R. serves as an editor for IJHPR.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Nissanholtz-Gannot, R., Burger, A. & Rosen, B. Primary care physicians' perceptions of Israel's national program for quality indicators in community healthcare – 2010 and 2020. Isr J Health Policy Res 14, 21 (2025). https://doi.org/10.1186/s13584-025-00685-5


Keywords