Abstract:
This thesis examines the effects of macroeconomic factors on the level and volatility of inflation in the Euro Area, with the aim of improving the accuracy of inflation forecasts through econometric modelling. Inflation aggregates for the EU as well as inflation levels of selected countries are analysed, and the difference between these inflation estimates and forecasts is documented. The research proposes alternative models depending on the focus and scope of the inflation forecasts. I find that models with a Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) in mean process have better explanatory power for inflation variance than regular GARCH models. The significant coefficients differ across EU countries in comparison with the aggregate EU-wide inflation forecast. The presence of more pronounced GARCH components in countries with more stressed economies indicates that inflation volatility in these countries is likely to occur as a result of the economic stress. In addition, other economies in the Euro Area are found to exhibit a relatively stable variance of inflation over time. Therefore, when analysing EU inflation one has to take into consideration the large differences at the country level and examine the countries one by one.
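The GARCH-in-mean extension mentioned above lets the conditional variance feed back into the inflation level itself. The following minimal sketch simulates such a process; the parameter values are illustrative assumptions, not estimates from the thesis.

```python
import numpy as np

def simulate_garch_in_mean(n=1000, mu=0.1, delta=0.5,
                           omega=0.05, alpha=0.1, beta=0.85, seed=0):
    """Simulate a GARCH(1,1)-in-mean process.

    Mean equation:     y_t = mu + delta * sigma2_t + eps_t
    Variance equation: sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1}

    The delta * sigma2_t term is the "in mean" component: periods of
    high inflation volatility shift the inflation level itself.
    All parameter values here are hypothetical.
    """
    rng = np.random.default_rng(seed)
    y = np.empty(n)
    sigma2 = np.empty(n)
    sigma2[0] = omega / (1 - alpha - beta)  # unconditional variance
    eps_prev = 0.0
    for t in range(n):
        if t > 0:
            sigma2[t] = omega + alpha * eps_prev**2 + beta * sigma2[t - 1]
        eps = np.sqrt(sigma2[t]) * rng.standard_normal()
        y[t] = mu + delta * sigma2[t] + eps
        eps_prev = eps
    return y, sigma2

y, sigma2 = simulate_garch_in_mean()
```

With delta > 0, the sample mean of the simulated series exceeds mu, which is the channel through which volatility gains explanatory power for the level.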
Abstract:
Information systems are widespread and used by anyone with computing devices, as well as by corporations and governments. It is often the case that security leaks are introduced during the development of an application. The reasons for these security bugs are multiple, but among them one can easily identify that it is very hard to define and enforce relevant security policies in modern software. This is because modern applications often rely on container sharing and multi-tenancy, where, for instance, data can be stored in the same physical space but is logically mapped into different security compartments or data structures. In turn, these security compartments, into which data is classified by security policies, can also be dynamic and depend on runtime data. In this thesis we introduce and develop the novel notion of dependent information flow types, and focus on the problem of ensuring data confidentiality in data-centric software. Dependent information flow types fit within the standard framework of dependent type theory, but, unlike usual dependent types, crucially allow the security level of a type, rather than just the structural data type itself, to depend on runtime values. Our dependent function and dependent sum information flow types provide a direct, natural and elegant way to express and enforce fine-grained security policies on programs, namely programs that manipulate structured data types in which the security level of a structure field may depend on values dynamically stored in other fields. The main contribution of this work is an efficient analysis that allows programmers to verify, during the development phase, whether programs have information leaks, that is, whether programs protect the confidentiality of the information they manipulate. We also implemented a prototype typechecker, available at http://ctp.di.fct.unl.pt/DIFTprototype/.
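The key idea, a security label that depends on a runtime value, can be illustrated with a small dynamic check. The thesis itself performs this verification statically with a typechecker; the record shape, lattice and `read` function below are hypothetical, used only to make the value-dependent label concrete.

```python
from dataclasses import dataclass

# Hypothetical security lattice: higher number = more confidential.
LEVELS = {"public": 0, "user": 1, "admin": 2}

@dataclass
class Record:
    """A record whose 'data' field's security level depends on the
    runtime value of 'owner_role': a value-dependent label."""
    owner_role: str   # "public", "user" or "admin"
    data: str

def read(record, reader_level):
    """Allow the read only if the reader's clearance dominates the
    label computed from the record's own runtime value."""
    required = LEVELS[record.owner_role]
    if reader_level < required:
        raise PermissionError("information-flow violation")
    return record.data

r = Record(owner_role="admin", data="secret")
```

A reader with clearance 1 attempting `read(r, 1)` is rejected, whereas the same code accepts any reader when `owner_role` is "public"; the dependent types of the thesis rule out such leaks at compile time instead of at runtime.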
Abstract:
Geochemical and geochronological analyses of samples of surficial Acre Basin sediments and fossils indicate that an extensive fluvial-lacustrine system occupying this region desiccated slowly during the last glacial cycle (LGC). This research documents direct evidence of aridity in western Amazonia during the LGC and is important both for establishing boundary conditions for LGC climate models and for correlating marine and continental LGC climate conditions.
Abstract:
This study is specifically concerned with the effect of Enterprise Resource Planning (ERP) on Business Process Redesign (BPR). The researcher's experience and a review of previous studies imply that BPR and ERP are deeply related, and that a study exploring this relation further is necessary. To elaborate the hypothesis, a case study of the Turkish electricity distribution market and its privatization phase is investigated. Eight companies that took part in the privatization process and executed BPR serve as cases in this study. During the research, the cases are evaluated against critical success factors for both BPR and ERP. It was seen that combining ERP solution features with business processes leads companies to succeed in ERP and BPR implementation. When the companies' success and efficiency were compared before and after the ERP implementation, a considerable change was observed in organizational structure. It was found that team composition is important to the success of ERP projects. Additionally, when ERP plays a driver or enabler role, the companies can be considered successful; on the contrary, when ERP plays a neutral role with respect to business processes, the project fails. In conclusion, it can be said that the companies which implemented ERP successfully also accomplished the goals of BPR.
Abstract:
Urban mobility is one of the main challenges facing urban areas, due to the growing population and to traffic congestion, which result in environmental pressures. The pathway to sustainable urban mobility involves strengthening intermodal mobility. The integrated use of different transport modes is becoming more and more important, and intermodality has been mentioned as a way for public transport to compete with private cars. The aim of the current dissertation is to define a set of strategies to improve urban mobility in Lisbon and, by consequence, reduce the environmental impacts of transport. To that end, several intermodal practices across Europe were analysed, and the transport systems of Brussels and Lisbon were studied and compared, with special attention to intermodal systems. For the case study, data were gathered in both cities in the field, by using and observing the different transport modes, and two surveys of the cities' users were conducted. The study concluded that Brussels and Lisbon present significant differences. In Brussels the measures to promote intermodality are evident, while in Lisbon much still needs to be done. The study also made clear the need to improve Lisbon's public transport towards a more intermodal passenger transport system, through the integration of different transport modes and better information and ticketing systems. Some of the points requiring development are: interchanges' waiting areas; integration of the bicycle in public transport; information about connections with other transport modes; and real-time information to passengers pre-trip and on-trip, especially on buses and trams. After identifying the best practices in Brussels and the weaknesses in Lisbon, the possibility of applying some of the Brussels practices to Lisbon was evaluated. Brussels proved to be a good example of intermodality, and for that reason some of the recommendations to improve intermodal mobility in Lisbon can follow the practices in place in Brussels.
Abstract:
This paper presents an on-board bidirectional battery charger for Electric Vehicles (EVs) that operates in three different modes: Grid-to-Vehicle (G2V), Vehicle-to-Grid (V2G), and Vehicle-to-Home (V2H). Through these three operation modes, using bidirectional communications based on Information and Communication Technologies (ICT), it will be possible to exchange data between the EV driver and the future smart grids. This collaboration with the smart grids will strengthen collective awareness systems, contributing to solving and organizing issues related to energy resources and power grids. This paper presents the preliminary studies resulting from a PhD project on bidirectional battery chargers for EVs. It describes the topology of the on-board bidirectional battery charger and the control algorithms for the three operation modes. To validate the topology, a laboratory prototype was developed and experimental results were obtained for the three operation modes.
Abstract:
This chapter aims at developing a taxonomic framework to classify studies on the flexible job shop scheduling problem (FJSP). The FJSP is a generalization of the classical job shop scheduling problem (JSP), which is one of the oldest NP-hard problems. Although various solution methodologies have been developed to obtain good solutions in reasonable time for FJSPs with different objective functions and constraints, no study that systematically reviews the FJSP literature has been encountered. In the proposed taxonomy, the type of study, type of problem, objective, methodology, data characteristics, and benchmarking are the main categories. In order to verify the proposed taxonomy, a variety of papers from the literature are classified. Using this classification, several inferences are drawn and gaps in the FJSP literature are identified. With the proposed taxonomy, the aim is to develop a framework for a broad view of the FJSP literature and to construct a basis for future studies.
Abstract:
Hospitals nowadays collect vast amounts of data related to patient records. All these data hold valuable knowledge that can be used to improve hospital decision making. Data mining techniques aim precisely at the extraction of useful knowledge from raw data. This work describes the implementation of a medical data mining project based on the CRISP-DM methodology. Recent real-world data, from 2000 to 2013, related to inpatient hospitalization were collected from a Portuguese hospital. The goal was to predict generic hospital Length Of Stay based on indicators that are commonly available at the hospitalization process (e.g., gender, age, episode type, medical specialty). At the data preparation stage, the data were cleaned and variables were selected and transformed, leading to 14 inputs. Next, at the modeling stage, a regression approach was adopted in which six learning methods were compared: Average Prediction, Multiple Regression, Decision Tree, Artificial Neural Network ensemble, Support Vector Machine and Random Forest. The best learning model was obtained by the Random Forest method, which presents a high coefficient of determination (0.81). This model was then opened using a sensitivity analysis procedure that revealed three influential input attributes: the hospital episode type, the physical service where the patient is hospitalized, and the associated medical specialty. The extracted knowledge confirmed that the obtained predictive model is credible and has potential value for supporting the decisions of hospital managers.
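The modeling stage described above, a Random Forest regression evaluated with the coefficient of determination, can be sketched as follows. The hospital data are not available here, so a synthetic 14-feature dataset stands in for the hospitalization indicators; the pipeline shape, not the numbers, is the point.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 14 prepared inputs (gender, age,
# episode type, medical specialty, ...); values are not real data.
X, y = make_regression(n_samples=2000, n_features=14,
                       noise=10.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Fit the Random Forest regressor and score it on held-out data,
# mirroring the model comparison in the study.
model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
r2 = r2_score(y_test, model.predict(X_test))
```

On the study's real data this setup reached an R² of 0.81; on the synthetic stand-in the exact value will differ, which is expected.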
Abstract:
Customer lifetime value (LTV) enables using client characteristics, such as recency, frequency and monetary (RFM) value, to describe the value of a client over time in terms of profitability. We present the concept of LTV applied to telemarketing for improving the return on investment, using a recent (from 2008 to 2013) and real case study of bank campaigns to sell long-term deposits. The goal was to benefit from past contact history to extract additional knowledge. A total of twelve LTV input variables were tested, under a forward selection method and using a realistic rolling-windows scheme, highlighting the validity of five new LTV features. The results achieved by our LTV data-driven approach using neural networks allowed an improvement of up to 4 pp in the cumulative Lift curve for targeting the deposit subscribers when compared with a baseline model (with no history data). Explanatory knowledge was also extracted from the proposed model, revealing two highly relevant LTV features: the last result of the previous campaign to sell the same product, and the frequency of past client successes. The obtained results are particularly valuable for contact center companies, which can improve predictive performance without even having to ask for more information from the companies they serve.
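The cumulative Lift metric used above compares the response rate among the top-scored fraction of clients with the overall response rate. A minimal sketch, with a hypothetical toy ranking rather than the bank data:

```python
import numpy as np

def cumulative_lift(y_true, scores, fraction):
    """Cumulative lift at a given targeting depth: the response rate
    among the top-scored `fraction` of clients divided by the overall
    response rate. A lift of 1.0 means no better than random targeting."""
    order = np.argsort(scores)[::-1]           # rank clients, best first
    k = max(1, int(len(scores) * fraction))    # size of the targeted slice
    top_rate = y_true[order[:k]].mean()
    base_rate = y_true.mean()
    return float(top_rate / base_rate)

# Toy example: 10 clients, 4 subscribers (1); the model ranks two
# subscribers at the very top.
y = np.array([1, 1, 0, 1, 0, 0, 0, 0, 1, 0])
s = np.array([.9, .8, .7, .6, .5, .4, .3, .2, .1, .05])
lift_20 = cumulative_lift(y, s, 0.2)  # top 20% of clients
```

Here the top 20% contains only subscribers (rate 1.0) against a base rate of 0.4, so the lift is 2.5; the paper's 4 pp improvement refers to gains on curves of exactly this kind.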
Abstract:
Traffic Engineering (TE) approaches are increasingly important in network management to allow optimized configuration and resource allocation. In link-state routing, the task of setting appropriate weights on the links is both an important and a challenging optimization task. A number of different approaches have been put forward towards this aim, including the successful use of Evolutionary Algorithms (EAs). In this context, this work addresses the evaluation of three distinct EAs, one single-objective and two multi-objective, in two tasks related to weight-setting optimization towards optimal intra-domain routing, knowing the network topology and aggregated traffic demands and seeking to minimize network congestion. In both tasks, the optimization considers scenarios where there is a dynamic alteration in the state of the system: the first considers changes in the traffic demand matrices and the second considers the possibility of link failures. The methods thus need to simultaneously optimize for both conditions, the normal and the altered one, following a preventive TE approach towards robust configurations. Since this can be formulated as a bi-objective function, the use of multi-objective EAs, such as SPEA2 and NSGA-II, came naturally, and these were compared to a single-objective EA. The results show a remarkable behavior of NSGA-II in all proposed tasks, scaling well for harder instances and thus presenting itself as the most promising option for TE in these scenarios.
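The single-objective side of this setup can be sketched generically. The EA below evolves an integer link-weight vector against a black-box congestion function; the `toy` fitness is a placeholder, since the paper's actual fitness evaluates network congestion under shortest-path routing, and the operators shown (truncation selection, single-gene mutation) are a deliberately minimal assumption, not the paper's configuration.

```python
import random

def evolve_weights(congestion, n_links, pop_size=30, gens=50,
                   w_max=20, seed=0):
    """Minimal single-objective EA for link-weight setting.

    `congestion` maps a weight vector to a congestion measure to be
    minimized; in the paper this would be computed from the topology
    and the traffic demand matrix.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(1, w_max) for _ in range(n_links)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=congestion)                # best (lowest) first
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        for p in parents:
            child = p[:]
            i = rng.randrange(n_links)          # single-gene mutation
            child[i] = rng.randint(1, w_max)
            children.append(child)
        pop = parents + children
    return min(pop, key=congestion)

# Placeholder fitness: congestion is lowest when weights are balanced.
toy = lambda w: max(w) - min(w)
best = evolve_weights(toy, n_links=8)
```

Extending this to the paper's preventive TE amounts to replacing `congestion` with a bi-objective fitness (normal plus altered state), which is where NSGA-II and SPEA2 enter.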
Abstract:
Forming suitable learning groups is one of the factors that determine the efficiency of collaborative learning activities. However, only a few studies have addressed this problem in mobile learning environments. In this paper, we propose a new approach for automatic, customized, and dynamic group formation in Mobile Computer Supported Collaborative Learning (MCSCL) contexts. The proposed solution is based on the combination of three types of grouping criteria: the learner's personal characteristics, the learner's behaviours, and context information. Instructors can freely select the type, the number, and the weight of the grouping criteria, together with other settings such as the number, the size, and the type of learning groups (homogeneous or heterogeneous). Apart from a grouping mechanism, the proposed approach provides a flexible tool to monitor each learner and to manage the learning processes from the beginning to the end of collaborative learning activities. In order to evaluate the quality of the implemented group formation algorithm, we compare its Average Intra-cluster Distance (AID) with that of a random group formation method. The results show the higher effectiveness of the proposed algorithm in forming homogeneous and heterogeneous groups compared to the random method.
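The AID comparison used in the evaluation can be sketched as follows. The learner data are hypothetical (12 learners scored on 3 criteria, generated in three natural clusters), and the cluster-aligned grouping simply stands in for the paper's algorithm to illustrate the metric, not to reproduce it.

```python
import numpy as np

def average_intra_cluster_distance(X, groups):
    """Mean pairwise Euclidean distance between members of the same
    group, averaged over all within-group pairs; lower values mean
    more homogeneous groups."""
    dists = []
    for group in groups:
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                dists.append(np.linalg.norm(X[group[i]] - X[group[j]]))
    return float(np.mean(dists))

rng = np.random.default_rng(1)

# Hypothetical learners: three clusters of four, 3 criteria scores each.
centers = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0], [10.0, 10.0, 10.0]])
X = np.repeat(centers, 4, axis=0) + rng.normal(scale=0.5, size=(12, 3))

homogeneous = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]  # cluster-aligned
perm = rng.permutation(12)
random_groups = [perm[i:i + 4].tolist() for i in range(0, 12, 4)]

aid_homog = average_intra_cluster_distance(X, homogeneous)
aid_random = average_intra_cluster_distance(X, random_groups)
```

Criterion-aware grouping yields a markedly lower AID than the random baseline on such data, which is the effect the paper's evaluation measures.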
Abstract:
Security risk management is, by definition, a subjective and complex exercise, and it takes time to perform properly. Human resources are fundamental assets for any organization and, like any other asset, they have inherent vulnerabilities that need to be handled, i.e. managed and assessed. However, the nature of human behavior and the organizational environment in which people work make these tasks extremely difficult, hard to accomplish and prone to errors. Viewing security as a cost, organizations usually focus on the efficiency of the security mechanisms implemented to protect against external attacks, disregarding insider risks, which are much more difficult to assess. All this demands an interdisciplinary approach that combines technical solutions with psychological approaches in order to understand the organization's staff and detect any changes in their behaviors and characteristics. This paper discusses some methodological challenges in evaluating insider threats and their impacts, and in integrating them into a security risk framework, defined according to the security standard ISO/IEC_JTC1, to support the security risk management process.
Abstract:
Information security is concerned with the protection of information, which can be stored, processed or transmitted within the critical information systems of organizations, against loss of confidentiality, integrity or availability. Protective measures to prevent these problems result from the implementation of controls along several dimensions: technical, administrative or physical. A vital objective for military organizations is to ensure superiority in contexts of information warfare and competitive intelligence. Therefore, the problem of information security in military organizations has been a topic of intensive work at both national and transnational levels, and extensive conceptual and standardization work is being produced. A current effort is therefore to develop automated decision support systems to assist military decision makers, at different levels in the chain of command, in providing suitable control measures that can effectively deal with potential attacks and, at the same time, prevent, detect and contain vulnerabilities targeting their information systems. The concept and processes of the Case-Based Reasoning (CBR) methodology closely resemble classical military processes and doctrine, in particular the analysis of "lessons learned" and the definition of "modes of action". Therefore, the present paper addresses the modeling and design of a CBR system with two key objectives: to support an effective response in the context of information security for military organizations, and to allow for scenario planning and analysis in training and auditing processes.
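The retrieval step of the CBR cycle (retrieve, reuse, revise, retain) can be sketched with a nearest-case lookup. The case base, feature encoding and responses below are entirely hypothetical, chosen only to show how a past incident, a "lesson learned", is matched to a new one.

```python
import math

# Hypothetical case base: each past incident is a feature vector
# (e.g. normalized indicators of attack type) plus the response that
# worked, the "mode of action" retained from experience.
case_base = [
    {"features": [0.9, 0.1, 0.3], "response": "isolate host"},
    {"features": [0.2, 0.8, 0.5], "response": "patch and monitor"},
    {"features": [0.1, 0.2, 0.9], "response": "revoke credentials"},
]

def retrieve(query):
    """Return the stored case nearest to the query incident,
    using Euclidean distance over the feature vectors."""
    def dist(case):
        return math.dist(query, case["features"])
    return min(case_base, key=dist)

best = retrieve([0.85, 0.15, 0.25])
```

A full CBR decision support system would then reuse and revise the retrieved response and retain the outcome as a new case; only the retrieval step is shown here.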
Abstract:
Childhood protection is a subject of high value for society, but child abuse cases are difficult to identify. The process from suspicion to accusation is very difficult to carry through, as it requires very strong evidence. Typically, health care services deal with these cases from the beginning, when there is evidence based on the diagnosis, but this is not enough to support an accusation. Besides that, the subject is highly sensitive because there are legal aspects to deal with, such as patient privacy, paternity issues and medical confidentiality, among others. We propose a child abuse critical-knowledge monitoring system model that addresses this problem. This decision support system draws on multiple scientific domains: the capture of tokens from clinical documents from multiple sources; a topic model approach to identify the topics of the documents; and knowledge management, through the use of ontologies, to support critical-knowledge sensitivity concepts and relations such as symptoms and behaviors, among other evidence, in order to match them with the topics inferred from the clinical documents and then alert and log when clinical evidence is present. Based on these alerts, clinical personnel can analyze the situation and take appropriate action.