986 results for enterprise architecture
Abstract:
Focusing on Paul Rudolph's Art & Architecture Building at Yale, this thesis demonstrates how the building synthesises the architect's attitude to architectural education, urbanism and materiality. It tracks the evolution of the building from its origins – which bear a relationship to Rudolph's pedagogical ideas – to later moments when its occupants and others reacted to it in a series of ways that could never have been foreseen. The A&A became the epicentre of the university's counter-culture movement before it was ravaged by a fire of undetermined origin. Arguably, it represents the last of its kind in American architecture, a turning point at the threshold of postmodernism. Using an archive that was only made available to researchers in 2009, this is the first study to draw extensively on the research files of the late architectural writer and educator C. Ray Smith. Smith's 1981 manuscript about the A&A, entitled "The Biography of a Building," was never published. The associated research files and transcripts of discussions with some thirty interviewees, including Rudolph, provide a previously unavailable wealth of information. Following Smith's methodology, meetings were recorded with those involved in the A&A, including, where possible, some of Smith's original interviewees. When placed within other significant contexts – the physicality of the building itself as well as the literature which surrounds it – these previously untold accounts provide new perspectives and details which deepen the understanding of the building and its place within architectural discourse. Among the issues revealed is the influence of Louis Kahn's Yale Art Gallery and of Yale's Collegiate Gothic campus on the building's design. Following a tumultuous first fifty years, the A&A remains an integral part of the architectural education of Yale students and, furthermore, constitutes an important didactic tool for all students of architecture.
Abstract:
The aging population in many countries brings into focus rising healthcare costs and pressure on conventional healthcare services. Pervasive healthcare has emerged as a viable, technology-driven approach to alleviating such problems by allowing healthcare to move from hospital-centred care to self-care, mobile care, and at-home care. The state-of-the-art studies in this field, however, lack a systematic approach to providing comprehensive pervasive healthcare solutions from data collection to data interpretation and from data analysis to data delivery. In this thesis we introduce a Context-aware Real-time Assistant (CARA) architecture that integrates novel approaches with state-of-the-art technology solutions to provide a full-scale pervasive healthcare solution, with an emphasis on context awareness to help maintain the well-being of elderly people. CARA collects information about and around the individual in a home environment, and enables accurate recognition and continuous monitoring of activities of daily living. It employs an innovative reasoning engine to provide accurate real-time interpretation of the context and assessment of the current situation. Being mindful of the use of the system for sensitive personal applications, CARA includes several mechanisms to make its sophisticated intelligent components as transparent and accountable as possible; it also includes a novel cloud-based component for more effective data analysis. To deliver automated real-time services, CARA supports interactive video and medical-sensor-based remote consultation. Our proposal has been validated in three application domains that are rich in pervasive contexts and real-time scenarios: (i) Mobile-based Activity Recognition, (ii) Intelligent Healthcare Decision Support Systems and (iii) Home-based Remote Monitoring Systems.
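As a concrete illustration of the first validation domain, the sketch below shows one common shape of mobile-based activity recognition: windowed accelerometer readings are reduced to summary features and classified. The window size, feature set, activity labels and classifier are assumptions made for illustration, not CARA's actual design.

```python
# A minimal sketch of windowed, feature-based activity recognition of the
# kind used in mobile sensing; the window size, features, labels and
# classifier here are illustrative assumptions, not CARA's actual design.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 128  # accelerometer samples per window (e.g. ~2.5 s at 50 Hz)

def window_features(accel: np.ndarray) -> np.ndarray:
    """Summary statistics over one (WINDOW, 3) block of x/y/z readings."""
    return np.concatenate([
        accel.mean(axis=0),                       # per-axis mean
        accel.std(axis=0),                        # per-axis spread
        [np.abs(np.diff(accel, axis=0)).sum()],   # rough signal energy
    ])

# Synthetic stand-in for labelled sensor data: 200 windows, 3 activities.
rng = np.random.default_rng(0)
raw = rng.normal(size=(200, WINDOW, 3))
labels = rng.integers(0, 3, size=200)  # e.g. 0=rest, 1=walk, 2=stairs

X = np.stack([window_features(w) for w in raw])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

# At run time, each incoming sensor window is classified the same way.
print(clf.predict(window_features(raw[0])[None, :]))
```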
Abstract:
Our research follows a design science approach to develop a method that supports the initialization of enterprise system (ES) implementation projects – the chartering phase. This project phase is highly relevant to implementation success, but is understudied in IS research. In this paper, we derive design principles for a chartering method based on a systematic review of the ES implementation literature and semi-structured expert interviews. Our analysis identifies differences in the importance of certain success factors depending on the system type. The proposed design principles are built on these factors and are linked to key chartering activities. We specifically consider system-type-specific chartering aspects for process-centric Business Intelligence & Analytics (BI&A) systems, an emerging class of systems at the intersection of BI&A and business process management. In summary, this paper proposes design principles for a chartering method that takes the specifics of process-centric BI&A systems into account.
Abstract:
It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2^70 bytes), and this figure is expected to have grown by a factor of 10, to 44 zettabytes, by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of the digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising such systems have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats those characteristics as the dominant component affecting the results being sought. This multiplicity of techniques introduces another layer of heterogeneity, that is, heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question is therefore asked: can a generic solution for the monitoring and analysis of data be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner? The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating those techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias. To illustrate and realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. In order to demonstrate these concepts, a complex real-world example involving the near-real-time capture and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, because it requires tight interaction with domain experts, and thus the handling of subjective knowledge and inference; and thirdly, because, given the dearth of neurophysiologists, there is a real-world need for a solution in this domain.
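The production/interpretation/consumption workflow with a maintainable provenance record can be pictured with the minimal sketch below; the class and function names are invented for illustration and are not the platform's actual API.

```python
# A minimal sketch (not the platform's actual API) of a workflow built on
# the production, interpretation and consumption of data, with an
# append-only provenance record that a third party could audit.
import hashlib
import json
import time

def _digest(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

class ProvenanceLog:
    """Append-only record of every transformation applied to the data."""
    def __init__(self):
        self.entries = []

    def record(self, stage: str, inputs, output) -> None:
        self.entries.append({
            "stage": stage,
            "time": time.time(),
            "input_hash": _digest(inputs),
            "output_hash": _digest(output),
        })

def interpret(data, technique, log: ProvenanceLog):
    """Apply one analysis technique without altering the raw data."""
    result = technique(data)
    log.record(f"interpret:{technique.__name__}", data, result)
    return result

def mean_value(xs):
    return sum(xs) / len(xs)

log = ProvenanceLog()
raw = [3, 1, 4, 1, 5]                      # produced data (stand-in)
log.record("produce", raw, raw)
summary = interpret(raw, mean_value, log)  # one of several possible techniques
# Different techniques can be applied to the same raw data; the log records
# how each conclusion was derived so it can be independently verified.
print(summary, len(log.entries))
```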
Abstract:
An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.
This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.
On-demand digital-print service is a representative enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital-print-service provider (PSP), to evaluate our optimization algorithms.
In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
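The flavour of a genetic algorithm over order-dispatch sequences can be conveyed with the toy sketch below; the fitness function, operators and parameters are simplified illustrations, not the production scheduler described above.

```python
# A toy genetic algorithm over order-dispatch sequences; the fitness
# function, operators and parameters are simplified illustrations, not the
# IGA-based production scheduler described above.
import random

random.seed(0)
ORDERS = list(range(8))                               # order IDs to sequence
PROC = {o: random.uniform(1.0, 5.0) for o in ORDERS}  # assumed processing times

def total_flow_time(seq):
    """Toy fitness: sum of completion times (lower is better)."""
    t, total = 0.0, 0.0
    for o in seq:
        t += PROC[o]
        total += t
    return total

def crossover(a, b):
    """Order crossover: keep a slice of `a`, fill the rest in `b`'s order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = a[i:j]
    rest = [o for o in b if o not in child]
    return rest[:i] + child + rest[i:]

def mutate(seq):
    i, j = random.sample(range(len(seq)), 2)
    seq[i], seq[j] = seq[j], seq[i]

pop = [random.sample(ORDERS, len(ORDERS)) for _ in range(30)]
for _ in range(100):
    pop.sort(key=total_flow_time)
    elite = pop[:10]                                  # keep the best schedules
    children = [crossover(random.choice(elite), random.choice(elite))
                for _ in range(20)]
    for c in children:
        if random.random() < 0.2:
            mutate(c)
    pop = elite + children
    # An *incremental* GA would also inject newly arrived orders here
    # rather than restarting the search from scratch.

pop.sort(key=total_flow_time)
print(pop[0], round(total_flow_time(pop[0]), 2))
```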
We next discuss analysis and prediction of different attributes involved in hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution-time and process-status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, this approach also provides a probabilistic estimate of the predicted status. An order generally consists of multiple serial and parallel processes. We next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce an enterprise's late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis,
and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.
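The component-wise strategy, decompose, forecast each component separately, then aggregate, can be sketched as follows; the moving-average decomposition and the simple component models are deliberately plain stand-ins for the models described above.

```python
# Sketch of the component-wise strategy: decompose a series into trend and
# periodic components, forecast each with a suitable model, then aggregate.
# The moving-average decomposition and simple models are stand-ins.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(200)
series = 0.05 * t + np.sin(2 * np.pi * t / 7) + rng.normal(0.0, 0.1, 200)

# Decompose: moving-average trend plus a period-7 (weekly) component.
trend = np.convolve(series, np.ones(7) / 7, mode="same")
seasonal = series - trend

# Forecast each component separately.
coeffs = np.polyfit(t, trend, 1)                   # linear model for the trend
future_t = np.arange(200, 214)
trend_fc = np.polyval(coeffs, future_t)
phase_mean = np.array([seasonal[t % 7 == p].mean() for p in range(7)])
season_fc = phase_mean[future_t % 7]               # repeat the mean weekly cycle

# Aggregate the component forecasts into the forecast of the original series.
forecast = trend_fc + season_fc
print(np.round(forecast, 2))
```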
In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS that derive insightful knowledge from data and use it as guidance for production management. It is expected to provide solutions for enterprises to increase reconfigurability, automate more procedures, and obtain data-driven recommendations for effective decisions.
Abstract:
The conception of the FUELCON architecture, a composite tool for the generation and validation of patterns for assigning fuel assemblies to the positions in the grid of a reactor core section, has undergone an evolution throughout the history of the project. Different options for various subtasks were possible, envisioned, or actually explored or adopted. We project these successive, or even concomitant, configurations of the architecture into a meta-architecture, which, not by chance, happens to reflect basic choices in the field's history over the last decade.
Abstract:
This paper introduces a few architectural concepts from FUELGEN, which, like the generator in the FUELCON expert system, generates a "cloud" of reload patterns, but which, unlike that generator, is based on a genetic algorithm. There are indications that, in well-researched case studies, FUELGEN may outperform FUELCON and other tools reported in the literature, but careful comparisons have yet to be carried out. This paper complements the information in two other recent papers on FUELGEN. Moreover, a sequel project is outlined.
Abstract:
We continue the discussion of the decision points in the FUELCON meta-architecture. Having discussed the relation of the original expert system to its sequel projects in terms of an AND/OR tree, we consider one further domain for a neural component: parameter prediction downstream of the core-reload candidate-pattern generator, thus a replacement for the NOXER simulator currently in use in the project.
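A minimal sketch of such a neural surrogate is given below: a small regression network trained to map candidate reload patterns to a predicted core parameter, standing in for simulator runs. The 16-value pattern encoding and the synthetic target are assumptions made for illustration; the real parameters and training data would come from the project itself.

```python
# A hedged sketch of a neural stand-in for a simulator: a small regression
# network mapping a candidate reload pattern to a predicted core parameter.
# The encoding and the synthetic target are assumptions, not project data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
patterns = rng.random((500, 16))                 # assumed pattern encoding
target = patterns @ rng.random(16) / 16.0 + 1.0  # synthetic core parameter

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(patterns[:400], target[:400])

# Downstream of the generator, each candidate pattern is scored by the
# network instead of a full simulator run.
print(net.score(patterns[400:], target[400:]))   # R^2 on held-out patterns
```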
Abstract:
When designing a new passenger ship or modifying an existing design, how do we ensure that the proposed design and crew emergency procedures are safe from an evacuation point of view? In the wake of major maritime disasters such as the Herald of Free Enterprise and the Estonia, and in light of the growth in the numbers of high-density, high-speed ferries and large-capacity cruise ships, issues concerned with the evacuation of passengers and crew at sea are receiving renewed interest. In the maritime industry, ship evacuation models offer the promise of quickly and efficiently bringing evacuation considerations into the design phase, while the ship is "on the drawing board". maritimeEXODUS, winner of the BCS, CITIS and RINA awards, is such a model. Features such as the ability to realistically simulate human response to fire, the capability to model human performance in heeled orientations, a virtual reality environment that produces realistic visualisations of the modelled scenarios and an integrated abandonment model make maritimeEXODUS a truly unique tool for assessing the evacuation capabilities of all types of vessels under a variety of conditions. This paper describes the maritimeEXODUS model, the SHEBA facility from which data concerning passenger/crew performance in conditions of heel is derived, and an example application demonstrating the model's use in performing an evacuation analysis for a large passenger ship, partially based on the requirements of MSC circular 1033.
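The kind of quantity such a model reports can be illustrated with the deliberately toy egress estimate below; maritimeEXODUS itself models far more (fire, heel, behaviour), and none of these numbers come from the paper.

```python
# A deliberately toy egress estimate, only to illustrate the kind of
# quantity an evacuation model reports; maritimeEXODUS models far more
# (fire, heel, behaviour), and none of these numbers are from the paper.
import random

random.seed(3)
N_PASSENGERS = 500
EXIT_FLOW = 1.2  # persons per second through the assembly exit (assumed)

# Time for each passenger to reach the exit (assumed walking-time spread).
walk_times = [random.uniform(30.0, 180.0) for _ in range(N_PASSENGERS)]

# Passengers queue at the exit; throughput is limited by the flow rate.
clock = 0.0
for arrival in sorted(walk_times):
    clock = max(clock, arrival) + 1.0 / EXIT_FLOW
print(f"estimated total evacuation time: {clock:.0f} s")
```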
Abstract:
When designing a new passenger ship or modifying an existing design, how do we ensure that the proposed design and crew emergency procedures are safe in the event of an evacuation resulting from fire or another incident? In the wake of major maritime disasters such as the Scandinavian Star, the Herald of Free Enterprise and the Estonia, and in light of the growth in the number of high-density, high-speed ferries and large-capacity cruise ships, issues concerning the evacuation of passengers and crew at sea are receiving renewed interest. Fire and evacuation models with features such as the ability to realistically simulate the spread of heat and smoke and the human response to fire, as well as the capability to model human performance in heeled orientations, linked to a virtual reality environment that produces realistic visualisations of the modelled scenarios, are now available and can be used to aid the engineer in assessing ship design and procedures. This paper describes the maritimeEXODUS ship evacuation model and the SMARTFIRE fire simulation model, and provides an example application demonstrating the use of the models in performing fire and evacuation analysis for a large passenger ship, partially based on the requirements of MSC circular 1033.
Abstract:
When designing a new passenger ship or modifying an existing design, how do we ensure that the proposed design and crew emergency procedures are safe in the event of an evacuation resulting from fire or another incident? In the wake of major maritime disasters such as the Scandinavian Star, the Herald of Free Enterprise and the Estonia, and in light of the growth in the numbers of high-density, high-speed ferries and large-capacity cruise ships, issues concerning the evacuation of passengers and crew at sea are receiving renewed interest. Fire and evacuation models with features such as the ability to realistically simulate the spread of fire and fire suppression systems and the human response to fire, as well as the capability to model human performance in heeled orientations, linked to a virtual reality environment that produces realistic visualisations of modelled scenarios, are now available and can be used to aid the engineer in assessing ship design and procedures. This paper describes the maritimeEXODUS ship evacuation model and the SMARTFIRE fire simulation model, and provides an example application demonstrating the use of the models in performing fire and evacuation analysis for a large passenger ship, partially based on the requirements of MSC circular 1033. The fire simulations include the action of a water mist system.
Abstract:
This paper describes the use of a blackboard architecture for building a hybrid case-based reasoning (CBR) system. The Smartfire fire field modelling package has been built using this architecture and includes a CBR component, which allows qualitative spatial-reasoning knowledge from domain experts to be integrated into the system. The system can be used for the automatic set-up of fire field models, enabling fire safety practitioners who are not experts in modelling techniques to use a fire modelling tool. The paper discusses the integrating power of the architecture, which is based on a common knowledge representation comprising a metric diagram and place vocabulary, with mechanisms for adaptation and conflict resolution built on the blackboard.
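The control pattern of a blackboard architecture can be conveyed with the minimal sketch below: knowledge sources watch a shared store and contribute when their preconditions hold. The two sources are invented to mirror the set-up role described above; they are not Smartfire's actual components.

```python
# Minimal blackboard sketch: knowledge sources watch a shared store and
# contribute when their preconditions hold. The two sources below are
# invented stand-ins, not Smartfire's actual components.
blackboard = {"geometry": None, "mesh_cells": None}

def geometry_reader(bb):
    """Posts the room geometry onto the blackboard (stand-in CAD import)."""
    if bb["geometry"] is None:
        bb["geometry"] = {"rooms": 3, "doors": 2}
        return True
    return False

def mesh_advisor(bb):
    """Qualitative rule: mesh resolution scales with the number of openings."""
    if bb["geometry"] is not None and bb["mesh_cells"] is None:
        bb["mesh_cells"] = 10_000 * bb["geometry"]["doors"]
        return True
    return False

sources = [mesh_advisor, geometry_reader]  # registration order is irrelevant
progress = True
while progress:  # control loop: fire a source whose precondition holds
    progress = any(source(blackboard) for source in sources)
print(blackboard)
```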
Abstract:
This paper presents an investigation into applying case-based reasoning (CBR) to multiple heterogeneous case bases using agents. The adaptive CBR process and the architecture of the system are presented, and a case study is used to illustrate and evaluate the approach. The process of creating and maintaining the dynamic data structures is discussed. The similarity metrics employed by the system support the optimisation of collaboration between the agents, which is based on a blackboard architecture. The blackboard architecture is shown to support efficient collaboration between the agents towards an effective overall CBR solution, while case-based reasoning methods allow the overall system to adapt and "learn" new collaborative strategies for achieving the aims of the overall CBR problem-solving process.
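The role similarity metrics play in retrieval can be sketched with the weighted nearest-neighbour example below; the features, weights and cases are illustrative inventions, not the paper's actual metrics.

```python
# Sketch of weighted nearest-neighbour case retrieval of the kind a CBR
# system uses; the features, weights and cases are illustrative only.
def similarity(query: dict, case: dict, weights: dict) -> float:
    """Weighted average of per-feature similarities; features lie in [0, 1]."""
    total = sum(weights.values())
    score = sum(w * (1.0 - abs(query[f] - case[f])) for f, w in weights.items())
    return score / total

case_base = [
    {"ceiling_height": 0.3, "ventilation": 0.8, "solution": "coarse mesh"},
    {"ceiling_height": 0.9, "ventilation": 0.2, "solution": "fine mesh"},
]
weights = {"ceiling_height": 2.0, "ventilation": 1.0}
query = {"ceiling_height": 0.8, "ventilation": 0.3}

# Retrieve the most similar case; a full CBR cycle would then adapt it.
best = max(case_base, key=lambda c: similarity(query, c, weights))
print(best["solution"])
```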