861 results for Multi Domain Information Model
Abstract:
This paper investigates the value of information security in e-entrepreneurship by reviewing the literature that establishes the entrepreneurial domain and by relating it to the development of technological resources that create value for the customer in an online business. It details multiple paradigms regarding consumers' valuation of information security, relating them to common practices and previous research in technological entrepreneurship. The research presents and discusses the benefits of information security standards in e-entrepreneurship, detailing the ISO 27001 and PCI-DSS standards, which can be used to differentiate security initiatives and achieve competitive advantage while preserving information leadership as a critical resource for online business success. Based on the literature review, a theoretical research model is presented and research hypotheses are discussed. The model posits that information security affects information leadership and that information leadership, as a unique resource in e-business, contributes to e-entrepreneurship success. The adoption of information security standards also affects customers' trust in e-business, which in turn benefits the e-entrepreneurial strategy.
Abstract:
When it comes to information sets in real life, pieces of the whole set are often not available. This problem can originate from various causes and therefore follows different patterns. In the literature it is known as Missing Data. The issue can be handled in various ways, from discarding incomplete observations, to estimating what the missing values originally were, to simply ignoring the fact that some values are missing. The methods used to estimate missing data are called Imputation Methods. The work presented in this thesis has two main goals. The first is to determine whether any interactions exist between Missing Data, Imputation Methods and Supervised Classification algorithms when they are applied together. For this first problem we consider a scenario in which the databases used are discrete, meaning that no relation between observations is assumed. These datasets underwent processes involving different combinations of the three components mentioned. The outcome showed that the missing data pattern strongly influences the results produced by a classifier. Also, in some cases, the complex imputation techniques investigated in the thesis obtained better results than simple ones. The second goal of this work is to propose a new imputation strategy, this time constraining the specifications of the previous problem to a special kind of dataset, the multivariate time series. We designed new imputation techniques for this particular domain and combined them with some of the contrasted strategies tested in the previous chapter of this thesis. The time series were also subjected to processes involving missing data and imputation, in order to finally propose an overall better imputation method. In the final chapter of this work, a real-world example is presented, describing a water quality prediction problem. The databases that characterize this problem had their own original latent values, which provides a real-world benchmark for testing the algorithms developed in this thesis.
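A minimal sketch of the kind of experiment described above, not the thesis code: missing values are injected completely at random into a public dataset, a simple and a more elaborate imputation method are applied, and a downstream classifier is compared on each result. Dataset, missingness rate and classifier are illustrative assumptions.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.impute import SimpleImputer, KNNImputer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Inject values missing completely at random (one possible missing-data pattern).
rng = np.random.default_rng(0)
X_miss = X.copy()
X_miss[rng.random(X.shape) < 0.2] = np.nan

# Compare a simple imputer with a more complex one via classifier accuracy.
for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("knn", KNNImputer(n_neighbors=5))]:
    X_imp = imputer.fit_transform(X_miss)
    score = cross_val_score(DecisionTreeClassifier(random_state=0), X_imp, y, cv=5).mean()
    print(name, round(score, 3))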
Abstract:
The goal of this project is to learn the necessary steps to create a finite element model that can accurately predict the dynamic response of a Kohler Engines Heavy Duty Air Cleaner (HDAC). This air cleaner is composed of three glass-reinforced plastic components and two air filters. Several uncertainties arose in the finite element (FE) model due to the HDAC's component material properties and assembly conditions. To help understand and mitigate these uncertainties, analytical and experimental modal models were created concurrently to perform a model correlation and calibration. Over the course of the project, simple and practical methods were found for future FE model creation. Similarly, an experimental method for the optimal acquisition of experimental modal data was arrived at. After the model correlation and calibration was performed, a validation experiment was used to confirm the FE model's predictive capabilities.
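The abstract does not say which correlation metric was used; a common choice for comparing analytical and experimental mode shapes is the Modal Assurance Criterion (MAC), sketched below with hypothetical mode-shape vectors.

import numpy as np

def mac(phi_a, phi_e):
    """MAC between one analytical and one experimental mode shape vector."""
    num = np.abs(phi_a.conj() @ phi_e) ** 2
    den = (phi_a.conj() @ phi_a) * (phi_e.conj() @ phi_e)
    return float(np.real(num / den))

# Hypothetical mode shapes sampled at a few measurement points.
phi_fe   = np.array([0.10, 0.45, 0.80, 1.00])   # from the FE model
phi_test = np.array([0.12, 0.43, 0.78, 0.98])   # from the modal test
print(mac(phi_fe, phi_test))   # values close to 1.0 indicate a well-correlated mode pair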
Abstract:
Ensemble Stream Modeling and Data-Cleaning are sensor information processing systems that have different training and testing methods by which their goals are cross-validated. This research examines a mechanism that seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events so as to eliminate uncorrelated noise and choose the most likely model without overfitting, thus obtaining higher model confidence. Higher quality streams can be realized by combining many short streams into an ensemble that has the desired quality. The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction for a bush or natural forest-fire event, we take the burnt area (BA*), a sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing, for two reasons: first, the histogram of fire activity is highly skewed; second, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate the averaging effects; the resulting histogram is more satisfactory and conceptual knowledge is learned from the sensor streams. Second is the process of feature induction by cross-validating attributes with single or multi-target variables to minimize training error. We use the F-measure score, which combines precision and recall, to determine the false alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features. A sensitive variance measure such as the F-test is performed at each node's split to select the best attribute. The ensemble stream model approach proved to improve when complicated features were used with a simpler tree classifier. The ensemble framework for data-cleaning and the enhancements to quantify quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction) of sensor data led to the formation of streams for sensor-enabled applications, which further motivates the novelty of stream quality labeling and its importance in handling the vast amounts of real-time mobile streams generated today.
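A minimal illustration of the F-measure as it is commonly defined (harmonic mean of precision and recall); the fire-event labels below are invented and false alarms correspond to predicted-positive, actually-negative events.

from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0, 0, 1]   # hypothetical ground-truth fire events
y_pred = [0, 1, 1, 1, 0, 0, 1, 0, 0, 1]   # hypothetical classifier output

p = precision_score(y_true, y_pred)
r = recall_score(y_true, y_pred)
print("precision", p, "recall", r, "F1", f1_score(y_true, y_pred))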
Abstract:
Assessment processes are essential to guarantee the quality and continuous improvement of software in healthcare, as they measure software attributes over the lifecycle, verify the degree of alignment between the software and its objectives, and identify unanticipated events. This article analyses the use of an assessment model based on software metrics for three healthcare information systems from a public hospital that provides secondary and tertiary care in the region of Ribeirão Preto. Compliance with the metrics was investigated using questionnaires in guided interviews with the system analysts responsible for the applications. The outcomes indicate that most of the procedures specified in the model can be adopted to assess the systems that serve the organization, particularly for the attributes of compatibility, reliability, safety, portability and usability.
Abstract:
The usage of multi-material structures in industry, especially in the automotive industry, is increasing. To overcome the difficulties in joining these structures, adhesives offer several benefits over traditional joining methods. Accurate simulation of the entire fracture process, including the adhesive layer, is therefore crucial. In this paper, the material parameters of a previously developed meso-mechanical finite element (FE) model of a thin adhesive layer are optimized using the Strength Pareto Evolutionary Algorithm (SPEA2). The objective functions are defined as the error between experimental data and simulation data. The experimental data come from previously performed experiments in which an adhesive layer was loaded in monotonically increasing peel and shear. The two objective functions depend on 9 model parameters (decision variables) in total and are evaluated by running two FE simulations, one loading the adhesive layer in peel and the other in shear. The original study converted the two objective functions into a single function, which resulted in one optimal solution. In this study, however, a Pareto front is obtained by employing the SPEA2 algorithm. Thus, more insight into the material model, the objective functions, the optimal solutions and the decision space is acquired from the Pareto front. We compare the results and show good agreement with the experimental data.
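A minimal sketch of the Pareto-front idea, not of SPEA2 itself: given hypothetical candidate parameter sets and their two objective values (error against the peel test and error against the shear test, here random stand-ins for FE-simulation results), the non-dominated candidates are extracted.

import numpy as np

rng = np.random.default_rng(1)
candidates = rng.random((50, 9))   # 50 hypothetical 9-parameter sets (decision variables)
errors = rng.random((50, 2))       # stand-in for (peel_error, shear_error) of each candidate

def pareto_front(f):
    """Return indices of points not dominated by any other point (minimization)."""
    idx = []
    for i, fi in enumerate(f):
        dominated = np.any(np.all(f <= fi, axis=1) & np.any(f < fi, axis=1))
        if not dominated:
            idx.append(i)
    return idx

front = pareto_front(errors)
print(len(front), "non-dominated parameter sets out of", len(errors))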
Abstract:
On most if not all evaluatively relevant dimensions such as the temperature level, taste intensity, and nutritional value of a meal, one range of adequate, positive states is framed by two ranges of inadequate, negative states, namely too much and too little. This distribution of positive and negative states in the information ecology results in a higher similarity of positive objects, people, and events to other positive stimuli as compared to the similarity of negative stimuli to other negative stimuli. In other words, there are fewer ways in which an object, a person, or an event can be positive as compared to negative. Oftentimes, there is only one way in which a stimulus can be positive (e.g., a good meal has to have an adequate temperature level, taste intensity, and nutritional value). In contrast, there are many different ways in which a stimulus can be negative (e.g., a bad meal can be too hot or too cold, too spicy or too bland, or too fat or too lean). This higher similarity of positive as compared to negative stimuli is important, as similarity greatly impacts speed and accuracy on virtually all levels of information processing, including attention, classification, categorization, judgment and decision making, and recognition and recall memory. Thus, if the difference in similarity between positive and negative stimuli is a general phenomenon, it predicts and may explain a variety of valence asymmetries in cognitive processing (e.g., positive as compared to negative stimuli are processed faster but less accurately). In my dissertation, I show that the similarity asymmetry is indeed a general phenomenon that is observed in thousands of words and pictures. Further, I show that the similarity asymmetry applies to social groups. Groups stereotyped as average on the two dimensions agency / socio-economic success (A) and conservative-progressive beliefs (B) are stereotyped as positive or high on communion (C), while groups stereotyped as extreme on A and B (e.g., managers, homeless people, punks, and religious people) are stereotyped as negative or low on C. As average groups are more similar to one another than extreme groups, according to this ABC model of group stereotypes, positive groups are mentally represented as more similar to one another than negative groups. Finally, I discuss implications of the ABC model of group stereotypes, pointing to avenues for future research on how stereotype content shapes social perception, cognition, and behavior.
Abstract:
Fishing trials with monofilament gill nets and longlines using small hooks were carried out in Algarve waters (southern Portugal) over a one-year period. Four hook sizes of "Mustad" brand, round bent, flatted sea hooks (Quality 2316 DT, numbers 15, 13, 12 and 11) and four mesh sizes of 25, 30, 35 and 40 mm (bar length) monofilament gill nets were used. Commercially valuable sea breams dominated the longline catches, while small pelagics were relatively more important in the gill nets. Significant differences in the catch size frequency distributions of the two gears were found for all the most important species caught by both gears (Boops boops, Diplodus bellottii, Diplodus vulgaris, Pagellus acarne, Pagellus erythrinus, Spondyliosoma cantharus, Scomber japonicus and Scorpaena notata), with longlines catching larger fish and a wider size range than the nets. Whereas the longline catch size frequency distributions for most species largely overlapped across hook sizes, suggesting little or no difference in size selectivity, the gill net catch size frequency distributions clearly showed size selection. A variety of models were fitted to the gill net and hook data using the SELECT method, while the parameters of the logistic model were estimated by maximum likelihood for the longline data. The bi-normal model gave the best fits for most of the species caught with gill nets, while the logistic model adequately described hook selectivity. The results of this study show that the two static gears compete for many of the same species and have different impacts in terms of catch composition and size selectivity. This information will be useful for the improved management of these small-scale fisheries, in which many different gears compete for scarce resources.
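A minimal sketch, with invented catch-at-length data, of fitting a logistic selection curve r(L) = 1 / (1 + exp(-(a + b*L))) by maximum likelihood, in the spirit of the longline fits above; the SELECT fits used for the gill nets are more involved.

import numpy as np
from scipy.optimize import minimize

lengths = np.array([10., 12., 14., 16., 18., 20., 22.])    # hypothetical length classes (cm)
caught  = np.array([ 2.,  5., 12., 20., 26., 28., 30.])    # hypothetical retained counts
offered = np.array([30., 30., 30., 30., 30., 30., 30.])    # hypothetical fish encountering the gear

def neg_log_lik(theta):
    a, b = theta
    p = 1.0 / (1.0 + np.exp(-(a + b * lengths)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    # Binomial log-likelihood of the retained counts at each length class.
    return -np.sum(caught * np.log(p) + (offered - caught) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[-5.0, 0.3])
a, b = fit.x
print("length at 50% retention L50 =", -a / b)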
Abstract:
Understanding the fluctuations in population abundance is a central question in fisheries. The sardine fishery is of great importance to Portugal; it is data-rich and of primary concern to fisheries managers. In Portugal, sub-stocks of Sardina pilchardus (sardine) are found in different regions: the Northwest (IXaCN), Southwest (IXaCS) and the South coast (IXaS-Algarve). Each of these sardine sub-stocks is affected differently by a unique set of climate and ocean conditions, mainly during larval development and recruitment, which consequently affect the sardine fishery in the short term. Taking this hypothesis into consideration, we examined the effects of hydrographic (river discharge), sea surface temperature, wind-driven phenomena, upwelling, climatic (North Atlantic Oscillation) and fisheries variables (fishing effort) on S. pilchardus catch rates (landings per unit effort, LPUE, as a proxy for sardine biomass). A 20-year time series (1989-2009) was used for the different subdivisions of the Portuguese coast (sardine sub-stocks). For this analysis a multi-model approach was used, applying different time series models for data fitting (Dynamic Factor Analysis, Generalised Least Squares) and forecasting (Autoregressive Integrated Moving Average), as well as Surplus Production stock assessment models. The different models were evaluated and compared, and the most important variables explaining changes in LPUE were identified. The type of relationship between sardine catch rates and environmental variables varied across regional scales due to region-specific recruitment responses. Seasonality plays an important role in sardine variability within the three study regions. In IXaCN, autumn (the season with minimum spawning activity and minimum larvae and egg concentrations) SST, northerly wind and wind magnitude were negatively related to LPUE. In IXaCS, none of the explanatory variables tested was clearly related to LPUE. In IXaS-Algarve (South Portugal), both spring (the period when large abundances of larvae are found) northerly wind and wind magnitude were negatively related to LPUE, revealing that environmental effects coincide with the regional peak in spawning time. Overall, the results suggest that the management of small, short-lived pelagic species, such as sardine quotas/sustainable yields, should be adapted to a regional scale because of regional environmental variability.
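A minimal sketch of the forecasting step with synthetic data: a monthly LPUE-like series with seasonality is fitted with an ARIMA model and projected one year ahead. The series, model orders and horizon are illustrative assumptions, not the study's configuration.

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = pd.date_range("1989-01", periods=240, freq="MS")          # 20 years, monthly
seasonal = 10 + 3 * np.sin(2 * np.pi * t.month / 12)          # seasonal signal in LPUE
lpue = pd.Series(seasonal + rng.normal(0, 1, len(t)), index=t)

# Fit a seasonal ARIMA and forecast the next 12 months of LPUE.
model = ARIMA(lpue, order=(1, 0, 1), seasonal_order=(1, 0, 0, 12)).fit()
print(model.forecast(steps=12))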
Abstract:
Amid the trend of rising health expenditure in developed economies, changing healthcare delivery models is an important point of action for service regulators seeking to contain this trend. Such change is mostly induced either by financial incentives or by regulatory tools issued by the regulators and targeting service providers and patients. This creates a tripartite interaction between service regulators, professionals, and patients that manifests a multi-principal agent relationship, in which professionals are agents to two principals: regulators and patients. This thesis is concerned with this multi-principal agent relationship in healthcare and investigates the determinants of (non-)compliance with regulatory tools in light of the tripartite relationship. In addition, the thesis provides insights into the different institutional, economic, and regulatory settings that govern the multi-principal agent relationship in healthcare in different countries. Furthermore, the thesis proposes and empirically tests a conceptual framework of the possible determinants of physicians' (non-)compliance with regulatory tools issued by the regulator. The main findings are as follows. First, in a multi-principal agent setting, the use of financial incentives to align the objectives of professionals and the regulator is important but not the only solution; this finding is based on the heterogeneity of the financial incentives provided to professionals in different health markets, which does not support a one-size-fits-all model of financial incentives to influence clinical decisions. Second, soft law tools such as clinical practice guidelines (CPGs) are important for mitigating the problems of the multi-principal agent setting in health markets, as they reduce information asymmetries while preserving the autonomy of professionals. Third, CPGs are complex and heterogeneous, and so are the determinants of (non-)compliance with them. Fourth, CPGs work, but under conditions: factors such as intra-professional competition between service providers or practitioners may lead to non-compliance with CPGs if the CPGs are likely to reduce the professional's utility. Finally, different degrees of soft law mandate have different effects on providers' compliance. Generally, the stronger the mandate, the stronger the compliance; however, even with a strong mandate, drivers such as intra-professional competition and the co-management of patients by different professionals affected (non-)compliance.
Abstract:
Ground deformation provides valuable insights into subsurface processes, with patterns reflecting the characteristics of the source at depth. In active volcanic sites, displacements can be observed during unrest phases; a correct interpretation is therefore essential to assess the hazard potential. Inverse modeling is employed to obtain quantitative estimates of the parameters describing the source. However, despite the robustness of the available approaches, a realistic imaging of these reservoirs is still challenging. While analytical models return quick but simplistic results, assuming an isotropic and elastic crust, more sophisticated numerical models, accounting for the effects of topographic loads, crustal inelasticity and structural discontinuities, require much higher computational effort, and the information about crust rheology they need may be difficult to infer. All these approaches are based on a-priori constraints on the source shape, which influence the reliability of the solution. In this thesis, we present a new approach aimed at overcoming these limitations, modeling sources free of a-priori shape constraints with the advantages of FEM simulations but at a lower computational cost. The source is represented as an assembly of elementary units, consisting of cubic elements of a regular FE mesh loaded with unitary stress tensors. The surface response due to each of the six stress tensor components is computed and linearly combined to obtain the total displacement field. In this way, the source can assume potentially any shape. Our tests prove the equivalence between the deformation field due to our assembly and that of corresponding cavities with uniform boundary pressure. The ability to simulate pressurized cavities in a continuum domain makes it possible to pre-compute surface responses, avoiding remeshing. A Bayesian trans-dimensional inversion algorithm implementing this strategy is developed: 3D Voronoi cells are used to sample the model domain, selecting the elementary units contributing to the source solution and those remaining inactive as part of the crust.
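A minimal sketch of the superposition idea described above: the total surface displacement is a linear combination of pre-computed responses to unitary stress components acting on the elementary units. All sizes and values below are hypothetical; in practice the response matrix would be pre-computed once with FEM.

import numpy as np

n_obs   = 100        # surface observation points (3 displacement components each)
n_units = 50         # elementary cubic units selected as the active source
n_comp  = 6          # independent stress tensor components per unit

rng = np.random.default_rng(0)
# G[i, j, k]: response at observation i to a unit value of stress component k on unit j,
# pre-computed once so that no remeshing is needed at inversion time.
G = rng.normal(size=(3 * n_obs, n_units, n_comp))
m = rng.normal(size=(n_units, n_comp))      # trial stress loading of the active units

d = np.tensordot(G, m, axes=([1, 2], [0, 1]))   # total surface displacement field
print(d.shape)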
Abstract:
The aim of this thesis is to present exact and heuristic algorithms for the integrated planning of multi-energy systems. The idea is to disaggregate the energy system, starting with its core, the Central Energy System, and then proceeding towards the decentral part. A mathematical model for the generation expansion operations that optimizes the performance of a Central Energy System is therefore proposed first. To ensure that the proposed generation operations are compatible with the network, some extensions of the existing network are considered as well. All these decisions are evaluated both from an economic viewpoint and from an environmental perspective, as specific constraints related to greenhouse-gas emissions are imposed in the formulation. The thesis then presents an optimization model for a solar organic Rankine cycle in the context of transactive energy trading. This study examines the impact that this technology can have on peer-to-peer trading in renewable-based community microgrids. Here the consumer becomes a prosumer and engages actively in virtual trading with other prosumers at the distribution system level. Moreover, it investigates how different technological parameters of the solar organic Rankine cycle may affect the final solution. Finally, the thesis introduces a tactical optimization model for the maintenance scheduling phase of a Combined Heat and Power plant. Specifically, two types of cleaning operations are considered, i.e., online cleaning and offline cleaning. Furthermore, a piecewise linear representation of the electric efficiency variation curve is included. Given the challenge of solving the tactical management model, a heuristic algorithm is proposed. The heuristic works by solving the daily operational production scheduling problem, based on the final consumer's demand and on electricity prices. The aggregate information from the operational problem is then used to derive maintenance decisions at the tactical level.
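A minimal sketch, far simpler than the thesis models: generation from two hypothetical technologies is chosen to meet demand at minimum cost subject to a greenhouse-gas emissions cap, solved as a linear program. All numbers and technology names are invented for illustration.

from scipy.optimize import linprog

cost      = [30.0, 50.0]    # cost per MWh for (gas, biomass) -- hypothetical
emissions = [0.40, 0.05]    # tCO2 per MWh                    -- hypothetical
demand    = 1000.0          # MWh to be served
cap       = 150.0           # tCO2 allowed

res = linprog(
    c=cost,
    A_ub=[emissions],            # total emissions must stay under the cap
    b_ub=[cap],
    A_eq=[[1.0, 1.0]],           # generation must meet demand exactly
    b_eq=[demand],
    bounds=[(0, None), (0, None)],
)
print(res.x, res.fun)            # dispatch per technology and total cost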
Abstract:
In this thesis we address a multi-label hierarchical text classification problem in a low-resource setting and explore different approaches to identify the best one for our case. The goal is to train a model that classifies English school exercises according to a hierarchical taxonomy with few labeled data. The experiments in this work employ different machine learning models and text representation techniques: CatBoost with tf-idf features, classifiers based on pre-trained models (mBERT, LASER), and SetFit, a framework for few-shot text classification. SetFit proved to be the most promising approach, achieving better performance when only a few labeled examples per class are available for training. However, this thesis does not consider the full hierarchical taxonomy, but only the first two levels: to address classification with the classes at the third level, further experiments should be carried out, exploring methods for zero-shot text classification, data augmentation, and strategies to exploit the hierarchical structure of the taxonomy during training.
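A minimal sketch of the tf-idf baseline idea for multi-label exercise classification; logistic regression stands in for CatBoost, and the exercises, labels and query below are invented examples, not the thesis data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.pipeline import make_pipeline

exercises = [
    "Solve the quadratic equation x^2 - 5x + 6 = 0.",
    "Describe the water cycle and the role of evaporation.",
    "Compute the area of a triangle with base 4 and height 3.",
    "Explain how photosynthesis converts light into chemical energy.",
]
labels = [["maths", "algebra"], ["science"], ["maths", "geometry"], ["science", "biology"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)                      # multi-label indicator matrix
clf = make_pipeline(TfidfVectorizer(),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(exercises, Y)
pred = clf.predict(["Find the roots of x^2 - 9 = 0."])
print(mlb.inverse_transform(pred))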
Abstract:
City streets carry a lot of information that can be exploited to improve the quality of the services citizens receive. For example, autonomous vehicles need to act according to all the elements near the vehicle itself, such as pedestrians, traffic signs and other vehicles. It is also possible to use such information for smart city applications, for example to predict and analyze traffic or pedestrian flows. Among all the objects that can be found in a street, traffic signs are very important because of the information they carry, which can be exploited both for autonomous driving and for smart city applications. Deep learning and, more generally, machine learning models, however, need huge quantities of data to learn. Even though modern models are very good at generalizing, the more samples a model has, the better it can generalize between different samples. Creating these datasets organically, namely with real pictures, is a very tedious task because of the wide variety of signs in use around the world and especially because of all the possible lighting and orientation conditions, and conditions in general, in which they can appear. In addition, it may not be easy to collect enough samples for all the possible traffic signs, because some of them are very rare. Instead of collecting pictures manually, it is possible to exploit data augmentation techniques to create synthetic datasets containing the signs that are needed. Creating this data synthetically makes it possible to control the distribution and the conditions of the signs in the datasets, improving the quality and quantity of the training data. This thesis work is about using copy-paste data augmentation to create synthetic data for the traffic sign recognition task, with a sketch of the basic operation given below.
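A minimal sketch of copy-paste augmentation: a traffic-sign crop with an alpha channel is scaled and pasted onto a street background at a random position. The file names are placeholders, and real pipelines typically also vary lighting, blur and perspective.

import random
from PIL import Image

background = Image.open("street_background.jpg").convert("RGB")
sign = Image.open("sign_crop.png").convert("RGBA")   # transparent outside the sign

# Random scale relative to the background width, preserving the sign's aspect ratio.
scale = random.uniform(0.1, 0.3)
w = int(background.width * scale)
h = int(sign.height * w / sign.width)
sign_resized = sign.resize((w, h))

# Random placement fully inside the background.
x = random.randint(0, background.width - w)
y = random.randint(0, background.height - h)
background.paste(sign_resized, (x, y), mask=sign_resized)   # alpha-aware paste

# The bounding box (x, y, x + w, y + h) becomes the recognition/detection label.
background.save("synthetic_sample.jpg")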