33 results for Current efficiency
in Helda - Digital Repository of the University of Helsinki
Abstract:
Nitrogen (N) is one of the main inputs in cereal cultivation, and as more than half of the arable land in Finland is used for cereal production, N has contributed substantially to agricultural pollution through fertilizer leaching and runoff. In response to this global phenomenon, the European Community has launched several directives to reduce agricultural emissions to the environment. Through such measures, and by using economic incentives, it is expected that northern European agricultural practices will, in the future, include reduced N fertilizer application rates. Reduced use of N fertilizer is likely to decrease both production costs and pollution, but could also result in reduced yields and quality if crops experience temporary N deficiency. Therefore, more efficient N use in cereal production, to minimize pollution risks and maximize farmer income, represents a current challenge for agronomic research in the northern growing areas. The main objective of this study was to determine the differences in nitrogen use efficiency (NUE) among spring cereals grown in Finland. Additional aims were to characterize the multiple roles of NUE by analysing the extent of variation in NUE and its component traits among different cultivars, and to understand how other physiological traits, especially radiation use efficiency (RUE) and light interception, affect and interact with the main components of NUE and contribute to differences among cultivars. This study included cultivars of barley (Hordeum vulgare L.), oat (Avena sativa L.) and wheat (Triticum aestivum L.). Field experiments were conducted between 2001 and 2004 at Jokioinen, in Finland. To determine differences in NUE among cultivars and gauge the achievements of plant breeding in NUE, 17-18 cultivars of each of the three cereal species released between 1909 and 2002 were studied. Responses to nitrogen of landraces, old cultivars and modern cultivars of each cereal species were evaluated under two N regimes (0 and 90 kg N ha-1). Results of the study revealed that modern wheat, oat and barley cultivars had similar NUE values under Finnish growing conditions, and only results from a wider range of cultivars indicated that wheat cultivars could have lower NUE than the other species. There was a clear relationship between nitrogen uptake efficiency (UPE) and NUE in all species, whereas nitrogen utilization efficiency (UTE) had a strong positive relationship with NUE only for oat. UTE was clearly lower in wheat than in the other species. Other traits related to N translocation indicated that wheat also had a lower harvest index, nitrogen harvest index and nitrogen remobilisation efficiency, and therefore its N translocation efficiency was confirmed to be very low. On the basis of these results there appears to be both potential and a need for improvement in NUE. These results may help explain the underlying physiological differences in NUE and could help to identify alternative production options, such as the different roles that species can play in crop rotations designed to meet the demands of modern agricultural practices.
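The component traits named here follow the standard agronomic decomposition in which NUE is the product of uptake efficiency and utilization efficiency. A minimal sketch of that arithmetic, with hypothetical field values rather than the study's own data:

```python
# Standard decomposition NUE = UPE x UTE: grain yield per unit N supplied
# equals N uptake per unit N supplied times grain yield per unit N taken
# up. All numbers below are hypothetical.

def nue_components(grain_yield, n_supply, n_uptake):
    """Return (NUE, UPE, UTE).

    grain_yield : grain dry matter, kg ha-1
    n_supply    : N available (soil + fertilizer), kg N ha-1
    n_uptake    : N in above-ground biomass at maturity, kg N ha-1
    """
    upe = n_uptake / n_supply        # uptake efficiency
    ute = grain_yield / n_uptake     # utilization efficiency
    nue = grain_yield / n_supply     # equals upe * ute
    return nue, upe, ute

nue, upe, ute = nue_components(grain_yield=4500, n_supply=90, n_uptake=70)
print(f"NUE={nue:.1f}, UPE={upe:.2f}, UTE={ute:.1f}, "
      f"product check: {abs(nue - upe * ute) < 1e-6}")
```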
Abstract:
Gene mapping is a systematic search for genes that affect observable characteristics of an organism. In this thesis we offer computational tools to improve the efficiency of (disease) gene-mapping efforts. In the first part of the thesis we propose an efficient simulation procedure for generating realistic genetic data from isolated populations. Simulated data are useful for evaluating hypothesised gene-mapping study designs and computational analysis tools. As an example of such evaluation, we demonstrate how a population-based study design can be a powerful alternative to traditional family-based designs in association-based gene-mapping projects. In the second part of the thesis we consider the prioritisation of a (typically large) set of putative disease-associated genes acquired from an initial gene-mapping analysis. Prioritisation is necessary to be able to focus on the most promising candidates. We show how to harness current biomedical knowledge for the prioritisation task by integrating various publicly available biological databases into a weighted biological graph. We then demonstrate how to find and evaluate connections between entities, such as genes and diseases, in this unified schema by graph mining techniques. Finally, in the last part of the thesis, we define the concept of a reliable subgraph and the corresponding subgraph extraction problem. Reliable subgraphs concisely describe strong and independent connections between two given vertices in a random graph, and hence they are especially useful for visualising such connections. We propose novel algorithms for extracting reliable subgraphs from large random graphs. The efficiency and scalability of the proposed graph mining methods are backed by extensive experiments on real data. While our application focus is on genetics, the concepts and algorithms can be applied to other domains as well. We demonstrate this generality by considering coauthor graphs in addition to biological graphs in the experiments.
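The reliable-subgraph idea builds on two-terminal network reliability: each edge of the weighted graph is taken to exist independently with some probability, and a subgraph is good if it preserves a high probability of the two query vertices being connected. The thesis's extraction algorithms are not reproduced here; the sketch below only estimates that underlying connection probability by straightforward Monte Carlo sampling, on a hypothetical edge list:

```python
# Monte Carlo estimate of two-terminal reliability: each edge (u, v, p)
# exists independently with probability p; we estimate the probability
# that source s and target t end up connected. The edge list below is
# hypothetical (e.g. gene/phenotype vertices from integrated databases).

import random
from collections import defaultdict, deque

def connection_probability(edges, s, t, n_samples=10000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        adj = defaultdict(list)
        for u, v, p in edges:          # sample one realisation of the graph
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
        seen, queue = {s}, deque([s])  # BFS from s until t is found
        while queue and t not in seen:
            for w in adj[queue.popleft()]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        hits += t in seen
    return hits / n_samples

edges = [("gene1", "protein1", 0.9), ("protein1", "disease", 0.6),
         ("gene1", "pathway1", 0.5), ("pathway1", "disease", 0.7)]
print(connection_probability(edges, "gene1", "disease"))
```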
Abstract:
Modern drug discovery gives rise to a great number of potential new therapeutic agents, but in some cases efficient treatment of the patient may not be achieved because the delivery of active compounds to the target site is insufficient. Thus, drug delivery is one of the major challenges in current pharmaceutical research. Numerous nanoparticle-based drug carriers, e.g. liposomes, have been developed for enhanced drug delivery and targeting. Drug targeting may enhance the efficiency of the treatment and, importantly, reduce unwanted side effects by decreasing drug distribution to non-target tissues. Liposomes are biocompatible lipid-based carriers that have been studied for drug delivery during the last 40 years. They can be functionalized with targeting ligands and sensing materials for triggered activation. In this study, various external signal-assisted liposomal delivery systems were developed. Signals can be used to modulate drug permeation or release from the liposome formulation, and they provide accurate control of the time, place and rate of activation. The study involved three types of signals that were used to trigger drug permeation and release: electricity, heat and light. An electrical stimulus was utilized to enhance the permeation of liposomal DNA across the skin. Liposome/DNA complex-mediated transfections were performed in a tight rat epidermal cell model. Various transfection media and current intensities were tested, and transfection efficiency was evaluated non-invasively by monitoring the concentration of secreted reporter protein in the cell culture medium. Liposome/DNA complexes produced gene expression, but the electrical stimulus did not enhance the transfection efficiency significantly. A heat-sensitive liposomal drug delivery system was developed by coating liposomes with a biodegradable and thermosensitive poly(N-(2-hydroxypropyl)methacrylamide mono/dilactate) polymer. Temperature-triggered liposome aggregation and contents release from liposomes were evaluated. The cloud point temperature (CP) of the polymer was set to 42 °C. Polymer-coated liposome aggregation and contents release were observed above the CP of the polymer, while non-coated liposomes remained intact. The polymer precipitates above its CP and interacts with liposomal bilayers; it is likely that this induces permeabilization of the liposomal membrane and contents release. Light sensitivity was introduced to liposomes by incorporation of small (< 5 nm) gold nanoparticles. Hydrophobic and hydrophilic gold nanoparticles were embedded in thermosensitive liposomes, and contents release was investigated upon UV light exposure. UV light-induced lipid phase transitions were examined with small-angle X-ray scattering, and light-triggered contents release was also shown in a human retinal pigment epithelial cell line. Gold nanoparticles absorb light energy and transfer it into heat, which induces phase transitions in liposomes and triggers the contents release. In conclusion, external signal-activated liposomes offer an advanced platform for numerous applications in drug delivery, particularly in localized drug delivery. Drug release may be localized to the target site with a triggering stimulus, resulting in a better therapeutic response and fewer adverse effects. The triggering signal and mechanism of activation can be selected according to the specific application.
Abstract:
Not available
Abstract:
Objective: Attention deficit hyperactivity disorder (ADHD) is a life-long condition, but because of its historical status as a self-remitting disorder of childhood, empirically validated and reliable methods for the assessment of adults are scarce. In this study, the validity and reliability of the Wender Utah Rating Scale (WURS) and the Adult Problem Questionnaire (APQ), which survey childhood and current symptoms of ADHD, respectively, were studied in a Finnish sample. Methods: The self-rating scales were administered to adults with an ADHD diagnosis (n = 38), healthy control participants (n = 41), and adults diagnosed with dyslexia (n = 37). Items of the self-rating scales were subjected to factor analyses, after which the reliability and discriminatory power of the subscales, derived from the factors, were examined. The effects of group and gender on the subscales of both rating scales were studied. Additionally, the effect of age on the subscales of the WURS was investigated. Finally, the diagnostic accuracy of the total scores was studied. Results: On the basis of the factor analyses, a four-factor structure for the WURS and a five-factor structure for the APQ had the best fit to the data. All of the subscales of the APQ and three of the WURS achieved sufficient reliability. The ADHD group had the highest scores on all of the subscales of the APQ, whereas two of the subscales of the WURS did not differ statistically between the ADHD and dyslexia groups. None of the subscales of the WURS or the APQ was associated with the participant's gender. However, one subscale of the WURS describing dysthymia was positively correlated with the participant's age. With the WURS, the probability of a correct positive classification was .59 in the current sample and .21 when the relatively low prevalence of adult ADHD was taken into account. The probabilities of correct positive classifications with the APQ were .71 and .23, respectively. Conclusions: The WURS and the APQ can provide accurate and reliable information on childhood and adult ADHD symptoms, given some important constraints. Classifications made on the basis of the total scores are reliable predictors of an ADHD diagnosis only in populations with a high proportion of ADHD and a low proportion of other similar disorders. The subscale scores can provide detailed information about an individual's symptoms if the characteristics and limitations of each domain are taken into account. Improvements are suggested for two subscales of the WURS.
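The drop from .71 to .23 (and from .59 to .21) is the familiar effect of prevalence on positive predictive value, which follows directly from Bayes' theorem. A small sketch of the arithmetic; the sensitivity and specificity values below are hypothetical, not the study's estimates:

```python
# Positive predictive value (probability that a positive classification
# is correct) via Bayes' theorem. Sensitivity and specificity are
# hypothetical placeholders.

def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.5, 0.04):   # balanced study sample vs. low population prevalence
    print(f"prevalence={prev:.2f}  PPV={ppv(0.80, 0.85, prev):.2f}")
```

With any fixed sensitivity and specificity, the share of positives that are true positives collapses as the condition becomes rare, which is why the conclusions stress the population caveat.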
Abstract:
Various reasons, such as ethical issues in maintaining blood resources, growing costs, and strict requirements for safe blood, have increased the pressure for efficient use of resources in blood banking. The competence of blood establishments can be characterized by their ability to predict the volume of blood collection so as to provide cellular blood components in a timely manner as dictated by hospital demand. The stochastically varying clinical need for platelets (PLTs) sets a specific challenge for balancing supply with requests. Labour has been shown to be a primary cost-driver and should be managed efficiently. International comparisons of blood banking could recognize inefficiencies and allow reallocation of resources. Seventeen blood centres from 10 countries in continental Europe, Great Britain, and Scandinavia participated in this study. The centres were national institutes (5), parts of the local Red Cross organisation (5), or integrated into university hospitals (7). This study focused on the departments of blood component preparation of the centres. The data were obtained retrospectively by computerized questionnaires completed via the Internet for the years 2000-2002. The data were used in four original articles (numbered I through IV) that form the basis of this thesis. Non-parametric data envelopment analysis (DEA, II-IV) was applied to evaluate and compare the relative efficiency of blood component preparation. Several models were created using different input and output combinations. The focus of the comparisons was on technical efficiency (II-III) and labour efficiency (I, IV). An empirical cost model was tested to evaluate cost efficiency (IV). Purchasing power parities (PPP, IV) were used to adjust the costs of the working hours and to make the costs comparable among countries. The total annual number of whole blood (WB) collections varied from 8,880 to 290,352 in the centres (I). Significant variation was also observed in the annual volume of produced red blood cells (RBCs) and PLTs. The annual number of PLTs produced by any method varied from 2,788 to 104,622 units. In 2002, 73% of all PLTs were produced by the buffy coat (BC) method, 23% by apheresis and 4% by the platelet-rich plasma (PRP) method. The annual discard rate of PLTs varied from 3.9% to 31%. The mean discard rate (13%) remained in the same range throughout the study period and demonstrated similar levels and variation in 2003-2004 according to a specific follow-up question (14%, range 3.8%-24%). The annual PLT discard rates were, to some extent, associated with production volumes. The mean RBC discard rate was 4.5% (range 0.2%-7.7%). Technical efficiency showed marked variation (median 60%, range 41%-100%) among the centres (II). Compared to the efficient departments, the inefficient departments used excess labour resources (and probably production equipment) to produce RBCs and PLTs. Technical efficiency tended to be higher when the (theoretical) proportion of lost WB collections (total RBC+PLT loss) from all collections was low (III). Labour efficiency varied remarkably, from 25% to 100% (median 47%), when working hours were the only input (IV). Using the estimated total costs as the input (cost efficiency) revealed an even greater variation (13%-100%) and an overall lower efficiency level compared to labour only as the input.
In terms of cost efficiency alone, the savings potential (observed inefficiency) was more than 50% in 10 departments, whereas the labour and cost savings potentials were both more than 50% in six departments. The association between department size and efficiency (scale efficiency) could not be verified statistically in the small sample. In conclusion, international evaluation of technical efficiency in component preparation departments revealed remarkable variation. A suboptimal combination of manpower and production output levels was the major cause of inefficiency, and efficiency did not relate directly to production volume. Evaluation of the reasons for discarding components may offer a novel approach to studying efficiency. DEA was proven applicable in analyses including various factors as inputs and outputs. This study suggests that analytical models can be developed to serve as indicators of technical efficiency and promote improvements in the management of limited resources. The work also demonstrates the importance of integrating efficiency analysis into international comparisons of blood banking.
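For readers unfamiliar with DEA, efficiency scores like the median 60% above come from solving one small linear program per decision-making unit: how far its inputs could be scaled down while a best-practice combination of peer units still produces at least its outputs. A minimal input-oriented, CCR-style sketch with made-up data; the study's actual models used several richer input and output combinations:

```python
# Input-oriented CCR DEA: for unit k, minimize theta subject to some
# nonnegative combination lambda of all units using at most theta times
# k's inputs while producing at least k's outputs. Data are made up.

import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """X: (n_units, n_inputs), Y: (n_units, n_outputs). Returns theta per unit."""
    n, m = X.shape
    _, s = Y.shape
    thetas = []
    for k in range(n):
        c = np.r_[1.0, np.zeros(n)]          # decision vars: [theta, lambdas]
        A_in = np.c_[-X[k], X.T]             # sum_j lam_j*x_ij - theta*x_ik <= 0
        A_out = np.c_[np.zeros(s), -Y.T]     # -sum_j lam_j*y_rj <= -y_rk
        res = linprog(c,
                      A_ub=np.r_[A_in, A_out],
                      b_ub=np.r_[np.zeros(m), -Y[k]],
                      bounds=[(0, None)] * (n + 1))
        thetas.append(res.x[0])
    return np.array(thetas)

X = np.array([[120.], [90.], [200.]])              # e.g. working hours (x1000)
Y = np.array([[60., 10.], [55., 9.], [70., 8.]])   # RBC and PLT units (x1000)
print(dea_ccr_input(X, Y).round(2))                # 1.0 marks an efficient unit
```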
Abstract:
The purpose of this study was to extend understanding of how large firms pursuing sustained and profitable growth manage organisational renewal. A multiple-case study was conducted in 27 North American and European wood-industry companies, of which 11 were chosen for closer study. The study combined the organisational-capabilities approach to strategic management with corporate-entrepreneurship thinking. It charted the further development of an identification and classification system for capabilities comprising three dimensions: (i) the dynamism between firm-specific and industry-significant capabilities, (ii) hierarchies of capabilities and capability portfolios, and (iii) their internal structure. Capability building was analysed in the context of the organisational design, the technological systems and the type of resource-bundling process (creating new vs. entrenching existing capabilities). The thesis describes the current capability portfolios and the organisational changes in the case companies. It also clarifies the mechanisms through which companies can influence the balance between knowledge search and the efficiency of knowledge transfer and integration in their daily business activities, and consequently the diversity of their capability portfolio and the breadth and novelty of their product/service range. The largest wood-industry companies of today must develop a seemingly dual strategic focus: they have to combine leading-edge, innovative solutions with cost-efficient, large-scale production. The use of modern technology in production was no longer a primary source of competitiveness in the case companies, but rather belonged to the portfolio of basic capabilities. Knowledge and information management had become an industry imperative, on a par with cost effectiveness. Yet, during the period of this research, the case companies were better at supporting growth in the volume of existing activity than growth through new economic activities. Customer-driven, incremental innovation was preferred over firm-driven innovation through experimentation. The three main constraints on organisational renewal were the lack of slack resources, the aim for lean, centralised designs, and an inward-bound communication climate.
Abstract:
The purpose of this study was to evaluate intensity, productivity and efficiency in agriculture in Finland and to show the implications for N and P fertiliser management. Environmental concerns relating to agricultural production have been, and still are, focused on arguments about policies that affect agriculture. These policies constrain production while demand for agricultural products such as food, fibre and energy continuously increases. Therefore increasing productivity is a great challenge for agriculture. Over the last decades producers have experienced several large changes in the production environment, such as the policy reform when Finland joined the EU in 1995. Other market changes occurred with the further EU enlargement to neighbouring countries in 2005 and with the decoupling of supports over the 2006-2007 period. Decreasing prices, a decreased number of farmers and decreased profitability in agricultural production have resulted from these changes and constraints, and from technological development. It was known that the accession to the EU in 1995 would herald changes in agriculture. Of special interest was how the sudden changes in commodity prices, especially those of cereals, which decreased by 60%, would influence agricultural production. Knowledge of the properties of the production function increased in importance as a consequence of the price changes. Research on the economic instruments to regulate production was carried out and combined with earlier studies in paper V. In paper I the objective was to compare two different technologies, conventional farming and organic farming, and to determine differences in productivity and technical efficiency. In addition, input-specific or environmental efficiencies were analysed. The heterogeneity of agricultural soils and its implications were analysed in article II. In study III the determinants of technical inefficiency were analysed. The aspects and possible effects of the instability in policies due to a partial decoupling of production factors and products were studied in paper IV. Consequently, the connection between technical efficiency based on turnover and technical efficiency based on sales returns was analysed in this study. Simple economic instruments such as fertiliser taxes have a direct effect on fertiliser consumption and indirectly increase the value of organic fertilisers. However, fertiliser taxes do not address the N and P management problems adequately and are therefore not suitable for nutrient management improvements in general. The productivity of organic farms is lower on average than that of conventional farms, and the difference increases when looking at selling returns only. The organic sector needs more research and development on productivity. Livestock density in organic farming increases productivity; however, there is an upper limit to livestock densities on organic farms and therefore nutrients on organic farms are also limited. Soil factors affect phosphorus and nitrogen efficiency. Soils such as sand and silt have lower input-specific overall efficiency for the nutrients N and P. Special attention is needed for management on these soils. Clay soils and soils with moderate clay content have higher efficiency. Soil heterogeneity is a cause of unavoidable inefficiency in agriculture.
Abstract:
This thesis studies the informational efficiency of the European Union emission allowance (EUA) market. In an efficient market, the market price is unpredictable and above-average profits are impossible in the long run. The main research problem is whether the EUA price follows a random walk. The method is an econometric analysis of the price series, which includes an autocorrelation coefficient test and a variance ratio test. The results reveal that the price series is autocorrelated and therefore does not follow a random walk. In order to find out the extent of predictability, the price series is modelled with an autoregressive model. The conclusion is that the EUA price is autocorrelated only to a small degree and that the predictability cannot be used to make extra profits. The EUA market is therefore considered informationally efficient, although the price series does not fulfill the requirements of a random walk. A market review supports the conclusion, but it is clear that the maturing of the market is still in progress.
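The two tests named here are standard and easy to sketch: under a random walk, the lag-1 autocorrelation of returns should be near zero and the Lo-MacKinlay variance ratio VR(q), the variance of q-period returns over q times the variance of one-period returns, should be near one. A self-contained sketch on a simulated series standing in for the actual EUA prices:

```python
# Lag-1 autocorrelation and a simple variance ratio, computed on a
# simulated random-walk price series that stands in for the EUA data.
# Under the random-walk null: rho(1) ~ 0 and VR(q) ~ 1.

import numpy as np

def autocorr(x, lag=1):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

def variance_ratio(returns, q):
    """Variance of overlapping q-period returns over q * variance of 1-period returns."""
    rq = np.convolve(returns, np.ones(q), mode="valid")  # rolling q-sums
    return rq.var(ddof=1) / (q * returns.var(ddof=1))

rng = np.random.default_rng(1)
prices = np.exp(np.cumsum(rng.normal(0, 0.02, 1500)))    # log-price random walk
r = np.diff(np.log(prices))
print(f"rho(1) = {autocorr(r):.3f}, VR(5) = {variance_ratio(r, 5):.3f}")
```

On the real series, significant deviations of rho(1) from zero or VR(q) from one are what lead to the "nonrandom walk" finding.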
Abstract:
In Finland one of the most important current issues in environmental management is the quality of surface waters. The increasing social importance of lakes and water systems has generated wide-ranging interest in lake restoration and management, concerning especially lakes suffering from eutrophication, but also from other environmental impacts. Most of the factors deteriorating the water quality in Finnish lakes are connected to human activities. Especially since the 1940s, intensified farming practices and the conduction of sewage waters from scattered settlements, cottages and industry have affected the lakes, which have simultaneously developed into recreational areas for a growing number of people. Therefore, this study focused on small lakes that are human-impacted, located close to settlement areas and of significant value for the local population. The aim of this thesis was to obtain information from lake sediment records for ongoing lake restoration activities and to show that a well-planned, properly focused lake sediment study is an essential part of the work related to the evaluation, target consideration and restoration of Finnish lakes. Altogether 11 lakes were studied. The study of Lake Kaljasjärvi was related to the gradual eutrophication of the lake. In lakes Ormajärvi, Suolijärvi, Lehee, Pyhäjärvi and Iso-Roine the main focus was on sediment mapping, as well as on the long-term changes in sedimentation, which were compared to Lake Pääjärvi. In Lake Hormajärvi the roles of different kinds of sedimentation environments in the eutrophication development of the lake's two basins were compared. Lake Orijärvi has not been eutrophied, but ore exploitation and the related acid mine drainage from the catchment area have influenced the lake drastically, and the changes caused by the metal load were investigated. The twin lakes Etujärvi and Takajärvi are slightly eutrophied, but also suffer problems associated with the erosion of the substantial peat accumulations covering the fringe areas of the lakes. These peat accumulations are related to Holocene water level changes, which were investigated. The methods used were chosen case-specifically for each lake. In general, acoustic soundings of the lakes, detailed description of the nature of the sediment and determinations of the physical properties of the sediment, such as water content, loss on ignition and magnetic susceptibility, were used, as was grain size analysis. A wide set of chemical analyses was also used. Diatom and chrysophycean cyst analyses were applied, and the diatom-inferred total phosphorus content was reconstructed. The results of these studies show that the ideal lake sediment study, as part of a lake management project, should be two-phased. In the first phase, thoroughgoing mapping of sedimentation patterns should be carried out by soundings and adequate corings. The actual sampling, based on the preliminary results, must include at least one long core from the main sedimentation basin for determining the natural background state of the lake. The recent, artificially impacted development of the lake can then be determined by short-core and surface sediment studies. The sampling must again be focused on the basis of the sediment mapping, and it should represent all the different sedimentation environments and bottom-dynamic zones, considering the inlets and outlets, as well as the effects of possible point loaders of the lake.
In practice, the budget of lake management projects is usually limited and only the most essential work and analyses can be carried out. The set of chemical and biological analyses and dating methods must therefore be thoroughly considered and adapted to the specific management problem. The results also show that information obtained from a properly performed sediment study enhances the planning of the restoration, makes it possible to define the target of the remediation activities and improves the cost-efficiency of the project.
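The diatom-inferred total phosphorus reconstruction mentioned above is classically done with a weighted-averaging (WA) transfer function: each diatom taxon is assigned a TP optimum estimated from a training set of lakes, and a fossil sample's TP is then the abundance-weighted mean of the optima of the taxa it contains. A minimal sketch with hypothetical taxa and training lakes; the thesis may well use a different calibration model, and the usual deshrinking step is omitted:

```python
# Weighted-averaging (WA) transfer function for diatom-inferred TP.
# Training data and TP values are hypothetical; deshrinking omitted.

import numpy as np

def wa_optima(train_abund, train_tp):
    """Taxon optima: abundance-weighted mean TP across training lakes.
    train_abund: (n_lakes, n_taxa) relative abundances; train_tp: (n_lakes,)."""
    return (train_abund * train_tp[:, None]).sum(0) / train_abund.sum(0)

def wa_reconstruct(fossil_abund, optima):
    """Abundance-weighted mean of taxon optima for one fossil sample."""
    return (fossil_abund * optima).sum() / fossil_abund.sum()

train = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.2, 0.7]])
tp = np.array([10., 25., 60.])            # TP (ug/l) of the training lakes
opt = wa_optima(train, tp)
core_sample = np.array([0.3, 0.4, 0.3])   # one fossil sediment sample
print(f"optima = {opt.round(1)}, DI-TP = {wa_reconstruct(core_sample, opt):.1f} ug/l")
```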
Abstract:
Advancements in analysis techniques have led to a rapid accumulation of biological data in databases. Such data often take the form of sequences of observations, examples including DNA sequences and amino acid sequences of proteins. The scale and quality of the data promise answers to various biologically relevant questions in more detail than has been possible before. For example, one may wish to identify areas in an amino acid sequence which are important for the function of the corresponding protein, or investigate how characteristics on the level of the DNA sequence affect the adaptation of a bacterial species to its environment. Many of the interesting questions are intimately associated with understanding the evolutionary relationships among the items under consideration. The aim of this work is to develop novel statistical models and computational techniques to meet the challenge of deriving meaning from the increasing amounts of data. Our main concern is modeling the evolutionary relationships based on the observed molecular data. We operate within a Bayesian statistical framework, which allows a probabilistic quantification of the uncertainties related to a particular solution. As the basis of our modeling approach we utilize a partition model, which is used to describe the structure of data by appropriately dividing the data items into clusters of related items. Generalizations and modifications of the partition model are developed and applied to various problems. Large-scale data sets also pose a computational challenge. The models used to describe the data must be realistic enough to capture the essential features of the current modeling task but, at the same time, simple enough to make it possible to carry out the inference in practice. The partition model fulfills these two requirements. The problem-specific features can be taken into account by modifying the prior probability distributions of the model parameters. The computational efficiency stems from the ability to integrate out the parameters of the partition model analytically, which enables the use of efficient stochastic search algorithms.
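The analytic integration referred to here is, in the common conjugate setting, the Dirichlet-multinomial trick: with a Dirichlet prior on each cluster's category frequencies, the cluster parameters integrate out in closed form, so candidate partitions can be scored directly during a stochastic search. A minimal sketch for equal-length DNA sequences; the priors and data are illustrative, not the thesis's actual model:

```python
# Closed-form log marginal likelihood of a partition of categorical
# sequences: per cluster and per column, a Dirichlet-multinomial with a
# symmetric Dirichlet(alpha) prior; parameters are integrated out
# analytically, so no sampling of frequencies is needed.

from math import lgamma
from collections import Counter

ALPHABET = "ACGT"

def log_marginal(cluster_seqs, alpha=1.0):
    """Log marginal likelihood of one cluster of equal-length sequences."""
    K = len(ALPHABET)
    total = 0.0
    for col in range(len(cluster_seqs[0])):
        counts = Counter(seq[col] for seq in cluster_seqs)
        n = sum(counts.values())
        total += lgamma(K * alpha) - lgamma(K * alpha + n)
        total += sum(lgamma(alpha + counts.get(c, 0)) - lgamma(alpha)
                     for c in ALPHABET)
    return total

def score_partition(partition):
    """Sum of cluster scores; higher = better-supported partition."""
    return sum(log_marginal(cluster) for cluster in partition)

seqs = ["ACAC", "ACAC", "GTGT", "GTGT"]
print(score_partition([seqs[:2], seqs[2:]]))  # two clusters: higher score here
print(score_partition([seqs]))                # everything merged: lower score
```

A stochastic search then only needs to propose moves between partitions and compare these scores.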
Abstract:
Ubiquitous computing is about making computers and computerized artefacts a pervasive part of our everyday lives, bringing more and more activities into the realm of information. The computationalization and informationalization of everyday activities increases not only our reach, efficiency and capabilities but also the amount and kinds of data gathered about us and our activities. In this thesis, I explore how information systems can be constructed so that they handle this personal data in a reasonable manner. The thesis provides two kinds of results: on the one hand, tools and methods for both the construction and the evaluation of ubiquitous and mobile systems; on the other hand, an evaluation of the privacy aspects of a ubiquitous social awareness system. The work emphasises real-world experiments as the most important way to study privacy. Additionally, the state of current information systems as regards data protection is studied. The tools and methods in this thesis consist of three distinct contributions. An algorithm for locationing in cellular networks is proposed that does not require the location information to be revealed beyond the user's terminal. A prototyping platform for the creation of context-aware ubiquitous applications, called ContextPhone, is described and released as open source. Finally, a set of methodological findings for the use of smartphones in social scientific field research is reported. A central contribution of this thesis is the set of pragmatic tools that allow other researchers to carry out experiments. The evaluation of the ubiquitous social awareness application ContextContacts covers both the usage of the system in general and an analysis of its privacy implications. The usage of the system is analyzed in the light of how users make inferences about others based on real-time contextual cues mediated by the system, drawing on several long-term field studies. The analysis of privacy implications draws together the social psychological theory of self-presentation and research on privacy in ubiquitous computing, deriving a set of design guidelines for such systems. The main findings from these studies can be summarized as follows: The fact that ubiquitous computing systems gather more data about users can be used not only to study the use of such systems in an effort to create better systems, but in general to study phenomena previously unstudied, such as the dynamic change of social networks. Systems that let people create new ways of presenting themselves to others can be fun for the users, but the self-presentation requires several thoughtful design decisions that allow the manipulation of the image mediated by the system. Finally, the growing amount of computational resources available to users can be used to allow them to use the data themselves, rather than just being passive subjects of data gathering.
Abstract:
For the past twenty years, several indicator sets have been produced at international, national and regional levels. Most of the work has concentrated on the selection of the indicators and on the collection of the pertinent data, but less attention has been given to the actual users and their needs. This dissertation focuses on the use of sustainable development indicator sets. The dissertation explores the reasons that have deterred the use of the indicators, discusses the role of sustainable development indicators in a policy cycle and broadens the view of use by recognising three different types of use. The work presents two indicator development processes: the Finnish national sustainable development indicators and the socio-cultural indicators supporting the measurement of eco-efficiency in the Kymenlaakso Region. The sets are compared using a framework created in this work to describe indicator process quality. It includes five principles supported by more specific criteria. The principles are high policy relevance, sound indicator quality, efficient participation, effective dissemination and long-term institutionalisation. The framework provided a way to identify the key obstacles to use. The two immediate problems with current indicator sets are that the users are unaware of them and that the indicators are often unsuitable to their needs. The reasons for these major flaws are the irrelevance of the indicators to policy needs, technical shortcomings in the content and presentation, failure to engage the users in the development process, non-existent dissemination strategies and a lack of institutionalisation to promote and update the indicators. The importance of the different obstacles differs among users and types of use. In addition to the indicator projects, the materials used in the dissertation include 38 interviews with high-level policy-makers or civil servants close to them, download statistics for the national indicator Internet pages, citations of the national indicator publication, and the media coverage of both indicator sets. According to the results, the most likely use of a sustainable development indicator set by policy-makers is to learn about the concept. Very little evidence of direct use to support decision-making was available. Conceptual use is also common for other user groups, namely the media, civil servants, researchers, students and teachers. Decision-makers themselves consider the most obvious use of the indicators to be the promotion of their own views, which is a form of legitimising use. The sustainable development indicators have different types of use in the policy cycle, and the most commonly expected, instrumental use is not very likely, or even desirable, at all stages. The stages of persuading the public and the decision-makers about new problems, as well as of formulating new policies, employ legitimising use. Learning through conceptual use is also inherent to policy-making, as the people involved learn about the new situation. Instrumental use is most likely in policy formulation, implementation and evaluation. The dissertation is an article dissertation, including five papers published in scientific journals and an extensive introductory chapter that discusses and weaves together the papers.