20 results for probability of occurrence


Relevance:

90.00%

Publisher:

Abstract:

Macroalgae are the main primary producers of temperate rocky shores, providing a three-dimensional habitat, food and nursery grounds for many other species. During the past decades, the state of coastal waters has deteriorated due to increasing human pressures, resulting in dramatic changes in coastal ecosystems, including macroalgal communities. To reverse the deterioration of the European seas, the EU has adopted the Water Framework Directive (WFD) and the Marine Strategy Framework Directive (MSFD), which aim at an improved status of coastal waters and the marine environment. Further, the Habitats Directive (HD) calls for the protection of important habitats and species (many of which are marine), and the Maritime Spatial Planning Directive calls for sustainable use of resources and human activities at sea and along the coasts. To efficiently protect important marine habitats and communities, we need knowledge of their spatial distribution. Ecological knowledge is also needed to assess the status of marine areas using biological indicators, as required by the WFD and the MSFD; knowledge of how biota changes with human-induced pressures is essential, but to reliably assess change, we also need to know how biotic communities vary over natural environmental gradients. This is especially important in sea areas such as the Baltic Sea, where natural environmental gradients create substantial differences in biota between areas. In this thesis, I studied the variation occurring in macroalgal communities across the environmental gradients of the northern Baltic Sea, including eutrophication-induced changes. The aim was to produce knowledge supporting the reliable use of macroalgae as indicators of the ecological status of marine areas and to test practical metrics that could potentially be used in status assessments.
Further, the aim was to develop a methodology for mapping the HD Annex I habitat "reefs", using the best available data on geology and bathymetry. The results showed that the large-scale variation in the macroalgal community composition of the northern Baltic Sea is largely driven by salinity and exposure. Exposure is also important on smaller spatial scales, affecting species occurrence, community structure and the depth penetration of algae. Consequently, this natural variability complicates the use of macroalgae as indicators of human-induced changes. Of the studied indicators, the number of perennial algal species, the perennial cover, the fraction of annual algae, and the lower limit of occurrence of red and brown perennial algae showed potential as usable indicators of ecological status. However, the cumulated cover of algae, commonly used as an indicator in fully marine environments, showed weak responses to eutrophication in the area. Although the mere occurrence of perennial algae did not show clear indicator potential, a distinct discrepancy in the occurrence of bladderwrack, Fucus vesiculosus, was found between two areas with differing eutrophication histories, the Bothnian Sea and the Archipelago Sea. The absence of Fucus from many potential sites in the outer Archipelago Sea is likely due to its inability to recover from its disappearance from the area 30-40 years ago, highlighting the importance of past events in macroalgal occurrence. The methodology presented for mapping the potential distribution and ecological value of reefs showed that relatively high mapping accuracy can be achieved by combining existing available data, and that the maps produced serve as valuable background information for more detailed surveys. Taken together, the results of this thesis contribute significantly to the knowledge of the macroalgal communities of the northern Baltic Sea and can be directly applied in various management contexts.

Relevance:

90.00%

Publisher:

Abstract:

This master's thesis studies the probability of bankruptcy of Finnish limited liability companies as part of credit risk assessment. The main idea of this thesis is to build and test bankruptcy prediction models for Finnish limited liability companies that can be utilized in credit decision making. The data used in this thesis consists of historical financial statements from 2112 Finnish limited liability companies, half of which have filed for bankruptcy. A total of four models are developed, two with logistic regression and two with multivariate discriminant analysis (MDA). The time horizon of the models varies from one to two years prior to bankruptcy, and 14 different financial variables are used in the model formation. The results show that the prediction accuracy of the models ranges between 81.7% and 88.9%, with the best accuracy achieved by the logistic regression model estimated one year prior to bankruptcy. However, the difference between the best logistic model and the best MDA model is minimal. Overall, based on the results of this thesis, it can be concluded that predicting bankruptcy is possible to some extent, although the results are naturally not perfect.
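The logistic-regression step described in the abstract can be sketched in a minimal form. Everything below is invented for illustration: the two financial ratios (equity ratio and quick ratio), the synthetic firm data, and the training settings are not the thesis's actual variables, data, or estimated coefficients.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain full-batch gradient descent for logistic regression."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        grad_w = [0.0] * len(w)
        grad_b = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # derivative of cross-entropy loss w.r.t. the logit
            for j, xj in enumerate(xi):
                grad_w[j] += err * xj
            grad_b += err
        w = [wj - lr * gj / n for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / n
    return w, b

def predict_proba(w, b, x):
    """Estimated probability of bankruptcy for one firm."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Synthetic sample: [equity ratio, quick ratio] one year before the
# statement date; label 1 = bankrupt, 0 = survived. Purely illustrative.
random.seed(0)
X = ([[random.gauss(0.15, 0.1), random.gauss(0.6, 0.2)] for _ in range(50)]
     + [[random.gauss(0.45, 0.1), random.gauss(1.4, 0.3)] for _ in range(50)])
y = [1] * 50 + [0] * 50

w, b = train_logistic(X, y)
accuracy = sum((predict_proba(w, b, xi) > 0.5) == bool(yi)
               for xi, yi in zip(X, y)) / len(y)
```

With well-separated synthetic groups the in-sample accuracy lands in the same general range the thesis reports for its real data, which only illustrates the mechanics, not the thesis's results.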

Relevance:

90.00%

Publisher:

Abstract:

Many companies try to improve their operational efficiency by implementing an enterprise resource planning (ERP) system, which makes it possible to control the resources of the company in real time. However, the success of the implementation project is not a foregone conclusion; a significant share of these projects end in failure, one way or another. Therefore, it is important to investigate ERP system implementation more closely in order to increase understanding of the factors influencing ERP system success and to improve the probability of a successful ERP implementation project. Consequently, this study was initiated because a manufacturing case company wanted to review the success of their ERP implementation project. To be exact, the case company hoped to gain both information about the success of the project and insight for improving future implementations. This study investigated ERP success specifically by examining factors that influence ERP key-user satisfaction. User satisfaction is one of the most commonly applied indicators of information system success. The research data was mainly collected through theme interviews. The subjects of the interviews were six key-users of the newly implemented ERP system. The interviewees were closely involved in the implementation project. Furthermore, they act as representative users who utilize the new system in everyday business processes. The collected data was analyzed through thematic analysis. Both data collection and analysis were guided by a theoretical frame of reference based on previous research on the subject. The results of the study aligned with the theoretical framework to a large extent. The four principal factors influencing key-user satisfaction were change management, contractor service, the key-users' system knowledge and the characteristics of the ERP product itself.
One of the most significant contributions of the research is that it confirmed the existence of a connection between change management and ERP key-user satisfaction. Furthermore, it discovered two new sub-factors influencing contractor-service-related key-user satisfaction. In addition, the research findings indicated that in order to improve the current level of key-user satisfaction, the case company should pay special attention to improving system functionality and enhancing the key-users' knowledge. In similar implementation projects in the future, it would be important to ensure the success of change management and contractor-service-related processes.

Relevance:

90.00%

Publisher:

Abstract:

Our goal is to gain a better understanding of the different kinds of dependencies behind high-level capability areas. The models are suitable for investigating present-state capabilities or future developments of capabilities in the context of technology forecasting. Three levels are necessary for a model describing the effects of technologies on military capabilities: capability areas, systems and technologies. The contribution of this paper is to present one possible model for the interdependencies between technologies. Modelling interdependencies between technologies is the last building block in constructing a quantitative model for technology forecasting that includes the necessary levels of abstraction. This study supplements our previous research, and as a result we present a model for the whole process of capability modelling. As in our earlier studies, capability is defined as the probability of a successful task or operation, or of the proper functioning of a system. In order to obtain numerical data to demonstrate our model, we administered a questionnaire to a group of defence technology researchers, asking about the interdependencies between seven representative technologies. Because of the small number of participants and the general uncertainties of subjective evaluations, only rough conclusions can be drawn from the numerical results.
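A toy numerical illustration of the three-level idea (technologies, systems, capability-as-probability) might look as follows. The technology names, the probabilities, the dependency rule and the series/parallel structure are all invented here; the paper's actual interdependency model is more elaborate.

```python
# Toy three-level sketch: technology levels -> system success
# probabilities -> capability, read as the probability that the
# task succeeds. All names and numbers are invented.

tech = {"sensor": 0.90, "datalink": 0.80, "processing": 0.95}

# One invented interdependency: the datalink cannot perform better
# than the processing technology it depends on.
tech_eff = dict(tech)
tech_eff["datalink"] = min(tech["datalink"], tech["processing"])

def series(probs):
    """All components must work (independent failures assumed)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def parallel(probs):
    """At least one component must work (independent failures assumed)."""
    fail = 1.0
    for p in probs:
        fail *= 1.0 - p
    return 1.0 - fail

# System level: surveillance needs both the sensor and the datalink.
p_surveillance = series([tech_eff["sensor"], tech_eff["datalink"]])

# Capability level: the task succeeds if surveillance or an invented
# backup system (success probability 0.5) works.
p_capability = parallel([p_surveillance, 0.5])
```

Here the surveillance system works with probability 0.9 x 0.8 = 0.72, and the capability, given the backup, with probability 1 - 0.28 x 0.5 = 0.86; the point is only that capability estimates propagate upward through the levels as probabilities.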

Relevance:

90.00%

Publisher:

Abstract:

Intelligence from a human source that is falsely thought to be true is potentially more harmful than a total lack of it. The veracity assessment of gathered intelligence is one of the most important phases of the intelligence process. Lie detection and veracity assessment methods have been studied widely, but a comprehensive analysis of these methods' applicability is lacking. There are some problems related to the efficacy of lie detection and veracity assessment. According to a conventional belief, there exists an almighty lie detection method that is almost 100% accurate and suitable for any social encounter. However, scientific studies have shown that this is not the case, and popular approaches are often oversimplified. The main research question of this study was: What is the applicability of veracity assessment methods, which are reliable and based on scientific proof, in terms of the following criteria?

o Accuracy, i.e. the probability of detecting deception successfully
o Ease of Use, i.e. how easily the method can be applied correctly
o Time Required to apply the method reliably
o No Need for Special Equipment
o Unobtrusiveness of the method

In order to answer the main research question, the following supporting research questions were answered first: What kinds of interviewing and interrogation techniques exist, and how could they be used in the intelligence interview context? What kinds of lie detection and veracity assessment methods exist that are reliable and based on scientific proof, and what kinds of uncertainty and other limitations are included in these methods? Two major databases, Google Scholar and Science Direct, were used to search for and collect existing studies and other papers related to the topic. After the search phase, an understanding of the existing lie detection and veracity assessment methods was established through a meta-analysis.
A Multi-Criteria Analysis utilizing the Analytic Hierarchy Process (AHP) was conducted to compare scientifically valid lie detection and veracity assessment methods in terms of the assessment criteria. In addition, a field study was arranged to gain first-hand experience of the applicability of different lie detection and veracity assessment methods. The Studied Features of Discourse and the Studied Features of Nonverbal Communication gained the highest ranking in overall applicability. They were assessed to be the easiest and fastest to apply, and to have the required temporal and contextual sensitivity. The Plausibility and Inner Logic of the Statement, the Method for Assessing the Credibility of Evidence and the Criteria-Based Content Analysis were also found to be useful, but with some limitations. The Discourse Analysis and the Polygraph were assessed to be the least applicable. Results from the field study support these findings. However, it was also discovered that even the most applicable methods are not entirely trouble-free. In addition, this study highlighted that three channels of information, Content, Discourse and Nonverbal Communication, can be subjected to veracity assessment methods that are scientifically defensible. There is at least one reliable and applicable veracity assessment method for each of the three channels. All of the methods require disciplined application and a scientific working approach. There are no quick gains if high accuracy and reliability are desired. Since most current lie detection studies concentrate on a scenario where roughly half of the assessed people are totally truthful and the other half are liars presenting a well-prepared cover story, it is proposed that in future studies lie detection and veracity assessment methods be tested against partially truthful human sources.
This kind of test setup would highlight new challenges and opportunities for the use of existing and widely studied lie detection methods, as well as for the modern ones that are still under development.
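The AHP weighting used in the multi-criteria analysis can be sketched as follows. The five criteria mirror the list in the abstract, but the pairwise judgments are invented for illustration, and the row geometric-mean approximation is used here in place of a full principal-eigenvector computation, which the thesis may well have used instead.

```python
import math

# Criteria from the research question; the pairwise judgments below are
# invented (Saaty 1-9 scale: pairwise[i][j] = how much more important
# criterion i is judged to be than criterion j).
criteria = ["Accuracy", "Ease of Use", "Time Required",
            "No Special Equipment", "Unobtrusiveness"]

pairwise = [
    [1,     3,     5,     7,     5],
    [1/3,   1,     3,     5,     3],
    [1/5,   1/3,   1,     3,     1],
    [1/7,   1/5,   1/3,   1,     1/3],
    [1/5,   1/3,   1,     3,     1],
]

# The row geometric mean approximates the principal eigenvector of a
# (nearly) consistent pairwise comparison matrix; normalizing the
# geometric means yields the criterion weights.
geo_means = [math.prod(row) ** (1 / len(row)) for row in pairwise]
total = sum(geo_means)
weights = {name: g / total for name, g in zip(criteria, geo_means)}
```

With judgments like these, Accuracy receives the largest weight and No Special Equipment the smallest; each candidate method's criterion scores would then be combined with these weights to produce the overall applicability ranking.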