811 results for Architecture and Complexity


Relevance: 90.00%

Abstract:

The purpose of this research was to explore a new way of experiencing a performance space using the portability and flexibility of a cargo container. Since the 17th century there has been a split between theater as a written work and theater as architecture; theater has lost its founding essence, becoming more about the structure and less about the performance. Contemporary theater designs grew out of street performance, which developed into theater types such as the Black Box and, more recently, video and projection screening. The exploration of kinetic uses in architecture and the defragmentation of a cargo container mark a new step in the development of theater design. Using a cargo container gave me a familiar object with specific dimensions from which to start my exploration, as well as the possibility of transporting the theater to many sites. The findings demonstrate that there are many unexplored possibilities for creating a performance space outside the conventional theater, one that can promote new types of performance as well as the use of new video and projection technologies.

Relevance: 90.00%

Abstract:

The effective control of production activities in a dynamic job shop with predetermined resource allocation for all jobs entering the system is a distinctive manufacturing environment that exists in industry. This thesis introduces a framework for an Internet-based, real-time shop floor control system for such a dynamic job shop environment. The system aims to maintain the schedule feasibility of all jobs entering the manufacturing system under any circumstance. It is capable of deciding how often manufacturing activities should be monitored to check for control decisions that need to be taken on the shop floor. The system provides the decision maker with real-time notifications, enabling feasible alternative solutions to be generated when a disturbance occurs on the shop floor. The control system also gives the customer real-time access to the status of jobs on the shop floor. Communication between the controller, the user, and the customer takes place through a user-friendly, web-based GUI. The proposed control system architecture and the interface for the communication system have been designed, developed, and implemented.
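The abstract describes the controller's behavior (adaptive monitoring frequency, feasibility checks, real-time notification) without implementation detail. As a minimal sketch of such a monitoring loop, the following Python fragment polls job status and alerts the decision maker when schedule feasibility breaks; all names (Job, is_feasible, notify_decision_maker) and the slack-based polling heuristic are illustrative assumptions, not the thesis's design.

```python
import time
from dataclasses import dataclass

@dataclass
class Job:
    job_id: str
    due_time: float        # scheduled completion time (epoch seconds)
    remaining_work: float  # processing time still required (seconds); updated by shop floor events

def is_feasible(job: Job, now: float) -> bool:
    """A job is schedule-feasible if its remaining work still fits before its due time."""
    return now + job.remaining_work <= job.due_time

def notify_decision_maker(job: Job) -> None:
    # Placeholder for the real-time notification channel (e.g., a push to the web GUI).
    print(f"ALERT: job {job.job_id} is no longer schedule-feasible")

def monitor(jobs: list[Job], base_interval: float = 60.0) -> None:
    """Poll the shop floor; tighten the polling interval as schedule slack shrinks."""
    while jobs:
        now = time.time()
        for job in list(jobs):
            if job.remaining_work <= 0:
                jobs.remove(job)                # job completed
            elif not is_feasible(job, now):
                notify_decision_maker(job)      # decision maker generates an alternative
        # Adaptive monitoring frequency: poll more often when the tightest job has little slack.
        slack = min((j.due_time - now - j.remaining_work for j in jobs), default=base_interval)
        time.sleep(max(1.0, min(base_interval, slack / 2)))
```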

Relevance: 90.00%

Abstract:

Information Technology (IT) is increasingly used to sustain business strategies, which has raised its relevance; IT governance is therefore seen as one of the current priorities of organizations. The pursuit of strategic alignment between business and IT is debated as a factor in business success, yet despite this importance, senior business managers are often reluctant to take responsibility for decisions involving IT, mainly because of the complexity of its infrastructure. Cloud computing, considered a new computing paradigm, is seen as an element capable of assisting in the implementation of organizational strategies, because its characteristics enable greater efficiency and agility in IT. The main objective of this study is to analyze the relationship between IT governance arrangements, strategic alignment, and the infrastructure-as-a-service (IaaS) model of public cloud computing. To that end, an exploratory, descriptive, and inferential study was developed, with a quantitative approach to the problem, using a cross-sectional descriptive survey method. An electronic questionnaire was applied to the associates of the São Paulo and Distrito Federal chapters of ISACA, totaling 164 respondents. The instrument was based on the theories of Weill and Ross (2006) for the matrix of IT governance arrangements; Henderson and Venkatraman (1993) and Luftman (2000) for the strategic alignment maturity model; and NIST (2011b), ITGI (2007), and CSA (2010) for the maturity of public IaaS in its essential characteristics. As for the main results, this research showed that with public IaaS the decision-making structures change, with greater participation of senior executives in all five key IT decisions (the IT governance arrangements matrix), including more technical decisions such as IT architecture and infrastructure. Alongside the increased participation of senior executives, a decrease in the participation of IT specialists was also observed, characterizing the decision process as a duopoly archetype (shared decision). With regard to strategic alignment, it was observed to change with cloud computing: organizations with public IaaS showed a statistically significant and greater maturity of strategic alignment when compared with organizations without IaaS. The maturity of public IaaS is at the intermediate level (level 3, "defined process"), with rapid elasticity and measured service reaching level 4, "managed and measurable". It was also possible to infer that, in organizations with public IaaS, there are positive correlations between the key decisions and the maturity of IaaS, especially the decisions on IT principles, architecture, and infrastructure, and the archetypes involving senior executives and IT specialists. Finally, a positive correlation was found between the maturity of strategic alignment and the maturity of public IaaS: the higher the strategic alignment, the greater the maturity of public IaaS, and vice versa.
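The abstract reports positive correlations between governance variables and IaaS maturity without naming the statistic used. Purely as an illustration of how such an analysis might look, the sketch below computes a Spearman rank correlation (a common choice for ordinal maturity scales) on invented survey scores; neither the data nor the choice of statistic comes from the study.

```python
from scipy.stats import spearmanr

# Hypothetical ordinal survey data (1-5 maturity scores) for ten respondents.
alignment_maturity = [3, 4, 2, 5, 3, 4, 4, 2, 5, 3]
iaas_maturity      = [3, 4, 2, 4, 3, 5, 4, 2, 5, 3]

rho, p_value = spearmanr(alignment_maturity, iaas_maturity)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A positive rho with a small p-value would support "the higher the strategic
# alignment, the greater the maturity of public IaaS".
```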

Relevance: 90.00%

Abstract:

The Carmelite friars were the last of the major mendicant orders to be established in Italy. Originally an eremitical order, they arrived from the Holy Land in the 1240s, decades after other mendicant orders, such as the Franciscans and Dominicans, had constructed churches and cultivated patrons in the burgeoning urban centers of central Italy. In a religious market already saturated with friars, the Carmelites distinguished themselves by promoting their Holy Land provenance and eremitical values, and by developing an institutional history that claimed descent from the Old Testament prophet Elijah. By the end of the 13th century the order had constructed thriving churches and convents and leveraged itself into a prominent position in the religious community. My dissertation analyzes these early Carmelite churches and convents, as well as the friars’ interactions with patrons, civic governments, and the urban space they occupied. Through three primary case studies – the churches and convents of Pisa, Siena, and Florence – I examine the Carmelites’ approach to art, architecture, and urban space as the order transformed its mission from one of solitary prayer to one of active ministry.

My central questions are these: To what degree did the Carmelites’ Holy Land provenance inform the art and architecture they created for their central Italian churches? And to what degree was their visual culture instead a reflection of the mendicant norms of the time?

I have sought to analyze the Carmelites at the institutional level, to determine how the order viewed itself and how it wanted its legacy to develop. I then seek to determine how, and whether, this institutional model was utilized in the artistic and architectural production of the individual convents. The understanding of Carmelite art as a promotional tool for the order’s identity is not new; however, my work is the first to consider the order’s architectural aspirations in depth. I also consider the order’s relationships with its de facto founding saint, the prophet Elijah, and its patron, the Virgin Mary, in a more comprehensive manner that situates the resulting visual culture within its contemporary theological and historical contexts.

Relevance: 90.00%

Abstract:

X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].

Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts within the community to manage and optimize CT dose.

As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a shared responsibility to optimize the radiation dose of CT examinations. The key to dose optimization is to determine the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would significantly benefit from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models that prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models in clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software for protocol optimization.

More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current conditions. The study models anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distributions, and further evaluates the dependence of the organ dose coefficients on patient size and scanner model. Distinct from prior work, these studies use the largest number of patient models to date, with representative age, weight percentile, and body mass index (BMI) ranges.
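The abstract does not state the functional form of the organ dose coefficients. In the CT dosimetry literature, CTDIvol-normalized organ dose coefficients are commonly fit as an exponential function of patient effective diameter; the sketch below assumes that form, with invented fit parameters, only to make the idea concrete.

```python
import math

# Hypothetical fit parameters (a, b) for a CTDIvol-normalized organ dose
# coefficient modeled as h(d) = exp(a - b * d), with d the effective patient
# diameter in cm. Real values would come from Monte Carlo fits per organ and protocol.
FIT_PARAMS = {
    "liver":   (0.9, 0.035),
    "stomach": (1.0, 0.038),
}

def organ_dose(organ: str, effective_diameter_cm: float, ctdi_vol_mgy: float) -> float:
    """Predict organ dose (mGy) as CTDIvol times a size-dependent coefficient."""
    a, b = FIT_PARAMS[organ]
    h = math.exp(a - b * effective_diameter_cm)
    return h * ctdi_vol_mgy

# Example: a 28 cm effective-diameter patient scanned at CTDIvol = 10 mGy.
print(f"liver dose ≈ {organ_dose('liver', 28.0, 10.0):.1f} mGy")
```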

With organ dose effectively quantified under constant tube current conditions, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, is achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated from Monte Carlo simulations in which the TCM function is explicitly modeled.
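The convolution-based technique is named but not specified in the abstract. A plausible reading, sketched below with illustrative numbers only, is to scale a reference dose level by the local tube current along z and convolve with a kernel that spreads dose longitudinally; the kernel shape and all parameters here are assumptions, not the thesis's.

```python
import numpy as np

def tcm_weighted_dose_profile(ctdi_ref_mgy: float, ma_profile: np.ndarray,
                              ma_ref: float, kernel: np.ndarray) -> np.ndarray:
    """
    Approximate the longitudinal dose profile of a TCM scan by scaling a
    reference dose level with the local tube current, then convolving with a
    kernel that spreads dose along z (scatter). Kernel shape is illustrative.
    """
    local_output = ctdi_ref_mgy * ma_profile / ma_ref   # primary-beam scaling
    kernel = kernel / kernel.sum()                      # normalize the kernel
    return np.convolve(local_output, kernel, mode="same")

# Hypothetical sinusoidal mA modulation over 40 cm (1 cm sampling) and a
# Gaussian spread kernel with ~3 cm standard deviation.
z = np.arange(40)
ma = 200 + 80 * np.sin(2 * np.pi * z / 40)
k = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
dose_z = tcm_weighted_dose_profile(10.0, ma, 200.0, k)
# Organ dose would then integrate dose_z over the organ's z-extent.
```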

Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). The first phase of the study focused on body CT examinations, so the patient’s major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
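The patient-to-phantom matching step is described only at a high level. As a minimal sketch, the fragment below picks the library phantom nearest to the patient in two hypothetical size attributes; the real matching in the study is driven by scout-image landmarks and may use different features.

```python
from dataclasses import dataclass

@dataclass
class Phantom:
    name: str
    trunk_length_cm: float
    effective_diameter_cm: float

def match_phantom(library: list[Phantom], trunk_cm: float, diameter_cm: float) -> Phantom:
    """Nearest-neighbour match on normalized size attributes (illustrative)."""
    return min(
        library,
        key=lambda p: ((p.trunk_length_cm - trunk_cm) / trunk_cm) ** 2
                    + ((p.effective_diameter_cm - diameter_cm) / diameter_cm) ** 2,
    )

library = [Phantom("adult_m_50th", 60.0, 28.0), Phantom("adult_f_25th", 52.0, 24.0)]
print(match_phantom(library, trunk_cm=55.0, diameter_cm=25.0).name)
```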

With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
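One common way to assess quantum noise directly in clinical images is to compute a per-pixel local standard deviation and summarize it over soft-tissue voxels; the sketch below illustrates that general idea with illustrative parameters and is not necessarily the method developed in the chapter.

```python
import numpy as np
from scipy import ndimage

def noise_map(image_hu: np.ndarray, size: int = 9) -> np.ndarray:
    """Local standard deviation as a simple per-pixel noise estimate."""
    img = image_hu.astype(float)
    mean = ndimage.uniform_filter(img, size)
    mean_sq = ndimage.uniform_filter(img ** 2, size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def global_noise_level(image_hu: np.ndarray, soft_tissue=(0, 100)) -> float:
    """Summarize noise as the mode of the noise-map histogram within a
    soft-tissue HU window (window and bin count are illustrative choices)."""
    nm = noise_map(image_hu)
    mask = (image_hu >= soft_tissue[0]) & (image_hu <= soft_tissue[1])
    hist, edges = np.histogram(nm[mask], bins=100)
    return float(edges[np.argmax(hist)])
```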

Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
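Image-based noise addition typically relies on the fact that quantum noise scales roughly as the inverse square root of dose. The sketch below applies that relation to add white Gaussian noise to a full-dose image; real low-dose simulation tools also shape the noise power spectrum and account for TCM, which this simplified illustration omits.

```python
import numpy as np

def simulate_reduced_dose(image_hu: np.ndarray, sigma_full: float,
                          dose_fraction: float, rng=None) -> np.ndarray:
    """
    If quantum noise scales as 1/sqrt(dose), a full-dose image with noise
    sigma_full can emulate a scan at dose_fraction f (0 < f <= 1) by adding
    zero-mean Gaussian noise with sigma_add = sigma_full * sqrt(1/f - 1),
    since sigma_reduced^2 = sigma_full^2 / f = sigma_full^2 + sigma_add^2.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma_add = sigma_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return image_hu + rng.normal(0.0, sigma_add, size=image_hu.shape)

# Example: emulate a 50%-dose scan from a full-dose slice with sigma = 12 HU.
full_dose = np.zeros((64, 64))
half_dose = simulate_reduced_dose(full_dose, sigma_full=12.0, dose_fraction=0.5)
```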

Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.

Relevance: 90.00%

Abstract:

Software engineering researchers are challenged to provide increasingly more powerful levels of abstraction to address the rising complexity inherent in software solutions. One development paradigm that places models at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models as first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify apropos solutions. A key component of MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models into an intermediate artifact in a high-level language (HLL), e.g., Java or C++, and then execute the resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine with semantics based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed.

The focus of this research is the synthesis engine, the layer in the DSVM which transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment in resources. At the onset of this research only one i-DSML had been created using the aforementioned approach, for the user-centric communication domain. This i-DSML is the Communication Modeling Language (CML), and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM’s synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise.

This dissertation investigates how to decouple the DSK from the MoE, subsequently producing a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to DSK as swappable framework extensions. This approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smartgrid (microgrid energy management), and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
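The decoupling described here is an architectural pattern rather than a specific API. As a language-agnostic sketch (written in Python, with all names hypothetical), a generic model of execution can delegate every domain-specific decision to a swappable knowledge object:

```python
from abc import ABC, abstractmethod

class DomainSpecificKnowledge(ABC):
    """Swappable framework extension: everything the engine must ask the domain."""
    @abstractmethod
    def interpret(self, model_change: dict) -> list[str]:
        """Map a runtime model change to abstract domain commands."""
    @abstractmethod
    def to_script(self, command: str) -> str:
        """Lower an abstract command to an executable script line."""

class GenericSynthesisEngine:
    """Generic model of execution (GMoE): domain-agnostic control flow only."""
    def __init__(self, dsk: DomainSpecificKnowledge):
        self.dsk = dsk
    def synthesize(self, model_changes: list[dict]) -> list[str]:
        script = []
        for change in model_changes:
            for command in self.dsk.interpret(change):
                script.append(self.dsk.to_script(command))
        return script

class CommunicationDSK(DomainSpecificKnowledge):
    """Hypothetical DSK for the user-centric communication domain (CML/CVM)."""
    def interpret(self, model_change):
        return [f"connect:{model_change['participant']}"]
    def to_script(self, command):
        return f"EXEC {command}"

engine = GenericSynthesisEngine(CommunicationDSK())
print(engine.synthesize([{"participant": "alice"}]))  # ['EXEC connect:alice']
```

Swapping CommunicationDSK for, say, a microgrid DSK reuses the engine unchanged, which is the reuse claim the dissertation's empirical study tests.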

Relevance: 90.00%

Abstract:

This dissertation studies context-aware applications and the algorithms proposed for the client side. The required context-aware infrastructure is discussed in depth to illustrate that such an infrastructure collects the mobile user’s context information, registers service providers, derives the mobile user’s current context, distributes user context among context-aware applications, and provides tailored services. The proposed approach tries to strike a balance between the context server and the mobile devices: context acquisition is centralized at the server to ensure the usability of context information among mobile devices, while context reasoning remains at the application level. Hence, centralized context acquisition with distributed context reasoning is viewed as the better overall solution. The context-aware search application is designed and implemented at the server side. A new algorithm is proposed to take user context profiles into consideration. By promoting feedback on the dynamics of the system, any prior user selection is saved for further analysis, so that it may contribute to the results of a subsequent search. On the basis of these developments at the server side, various solutions are provided at the client side. A software-based proxy component is set up for the purpose of data collection. This research endorses the belief that the client-side proxy should contain the context reasoning component; implementing such a component lends credence to this belief, in that the context applications are able to derive the user context profiles. Furthermore, a context cache scheme is implemented to manage the cache on the client device in order to minimize processing requirements and other resources (bandwidth, CPU cycles, power). Java and MySQL platforms are used to implement the proposed architecture and to test scenarios derived from a user’s daily activities. To meet the practical demands of a testing environment without the heavy cost of establishing a comprehensive infrastructure, a software simulation using the free Yahoo search API is provided as a means to evaluate the effectiveness of the design approach in a realistic way. The integration of the Yahoo search engine into the context-aware architecture shows how a context-aware application can meet user demands for tailored services and products in and around the user’s environment. The test results show that the overall design is highly effective, providing new features and enriching the mobile user’s experience through a broad scope of potential applications.
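The context cache scheme is described by its goal (minimizing processing, bandwidth, and power on the device) rather than its mechanics. As one illustrative realization, the fragment below caches derived context values with an LRU eviction policy and a time-to-live, so unchanged context need not be re-derived; the design and all names are assumptions, not the dissertation’s implementation.

```python
import time
from collections import OrderedDict

class ContextCache:
    """Small LRU cache with expiry for derived context values on the client."""
    def __init__(self, capacity: int = 64, ttl_s: float = 300.0):
        self.capacity, self.ttl_s = capacity, ttl_s
        self._store = OrderedDict()  # key -> (timestamp, value)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None or time.time() - entry[0] > self.ttl_s:
            self._store.pop(key, None)
            return None                      # miss: caller re-runs context reasoning
        self._store.move_to_end(key)         # mark as recently used
        return entry[1]

    def put(self, key: str, value) -> None:
        self._store[key] = (time.time(), value)
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = ContextCache()
if cache.get("user:location") is None:
    cache.put("user:location", "office")     # value derived by the reasoning component
```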

Relevance: 90.00%

Abstract:

In 2010 the architects of thebigairworld participated in the creation of a film about Marcel Duchamp's Étant donnés. The film stages two architectural doctors, Haralambidou and Watson, discussing Duchamp's piece, with images of the work running in parallel. Off camera, but by no means absent from the production, the Mobile Studio acted as cameraman, director, and grip.

Relevance: 90.00%

Abstract:

This paper investigates how low cost carrier (LCC) developments affected the traffic and financial performance of UK airports from 2002 to 2014. Considerable growth in traffic was experienced from 2002 to 2007, especially at regional airports, as a result of LCC expansion. This was replaced by a more volatile period from 2008 to 2014, in which many of the regional airports that had experienced the greatest increases in traffic during the early years then experienced the largest reductions. This has clearly had an impact on their financial well-being, resulting in reduced profits for many airports. It has also meant that many regional airports that seemed like attractive investments as a result of LCC expansion are now less financially appealing, especially given that the LCC sector in the UK appears to be shifting capacity to larger regional airports and, in some cases, London airports.

Relevance: 90.00%

Abstract:

Energy saving, reduction of greenhouse gases, and increased use of renewables are key policies for achieving the European 2020 targets. In particular, distributed renewable energy sources, integrated with spatial planning, require novel methods to optimise supply and demand. In contrast with large-scale wind turbines, small and medium wind turbines (SMWTs) have a less extensive impact on the use of space and on the power system; nevertheless, a significant spatial footprint is still present, and good spatial planning remains a necessity. To optimise the location of SMWTs, detailed knowledge of the spatial distribution of the average wind speed is essential. Hence, in this article, wind measurements and roughness maps were used to create a reliable annual mean wind speed map of Flanders at 10 m above the Earth’s surface. Via roughness transformation, the surface wind speed measurements were converted into meso- and macroscale wind data. The data were further processed using seven different spatial interpolation methods in order to develop regional wind resource maps. Based on statistical analysis, it was found that the transformation into mesoscale wind, in combination with Simple Kriging, was the most adequate method to create reliable maps for decision-making on optimal production sites for SMWTs in Flanders (Belgium).
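The roughness transformation itself is not spelled out in the abstract. A textbook way to move wind speeds between heights and roughness classes is the logarithmic wind profile under neutral stability; the sketch below applies it via an assumed blending height, with illustrative roughness lengths that are not the article’s values.

```python
import math

def log_profile(v_ref: float, z_ref: float, z: float, z0: float) -> float:
    """Logarithmic wind profile: speed at height z given a reference speed at
    z_ref over terrain with roughness length z0 (neutral stability)."""
    return v_ref * math.log(z / z0) / math.log(z_ref / z0)

def transform_roughness(v10_local: float, z0_local: float, z0_target: float,
                        z_blend: float = 60.0) -> float:
    """Translate a 10 m wind speed over one roughness class to another: go up
    to a blending height where the flow is assumed roughness-independent,
    then come back down over the target terrain."""
    v_blend = log_profile(v10_local, 10.0, z_blend, z0_local)
    return log_profile(v_blend, z_blend, 10.0, z0_target)

# Example: 5 m/s at 10 m over open farmland (z0 = 0.03 m) translated to
# suburban terrain (z0 = 0.5 m); roughness values are illustrative.
print(f"{transform_roughness(5.0, 0.03, 0.5):.2f} m/s")
```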

Relevance: 90.00%

Abstract:

This paper investigates the recent trend for cathedrals in England to develop a wider and more ambitious scope for their event and activity programmes. It sets out to explore the types of events now hosted at cathedrals, to consider the barriers to such ambitions, and to examine the opportunities presented by event programming to develop new audiences and grow attendance. The research focuses on the 42 Anglican cathedrals of England and involved a review of recent reports published by church and cathedral organisations, supported by an in-depth review of event activity and objectives at five selected cathedrals in southern England. Despite declining general church attendance in England, cathedrals have enjoyed two decades of attendance growth, both as places of worship and as tourist attractions, partly a reflection of a more complex contemporary search for multi-faceted types of spirituality. The paper explores how events can tap into the realm of individual spiritual capital and demonstrates the rich diversity of events now being hosted by cathedrals, offering a new categorisation into ecclesiastical/liturgical events, cultural and community events, and openly commercial event activity. Barriers remain, but key facilitating factors have been new investment in event expertise and professionalism, encouragement to experiment by key funding bodies such as the Heritage Lottery Fund, and the embracing of new forms of spirituality. The diversity of cathedral events reflects a newfound growth in the nurturing of “spiritual capital” among both worshippers and tourists.

Relevance: 90.00%

Abstract:

This presentation focuses on methods for the evaluation of complex policies. In particular, it focuses on evaluating interactions between policies and the extent to which two or more interacting policies mutually reinforce or hinder one another, in the area of environmental sustainability. Environmental sustainability is increasingly gaining recognition as a complex policy area, requiring a more systemic perspective and approach (e.g. European Commission, 2011). Current trends in human levels of resource consumption are unsustainable, and single solutions that target isolated issues independently of the broader context have so far fallen short. Instead, there is a growing call among both academics and policy practitioners for systemic change that acknowledges and engages with the complex interactions, barriers, and opportunities across the different actors, sectors, and drivers of production and consumption. Policy mixes, and the combination and ordering of policies within them, therefore become an important focus for those aspiring to design and manage transitions to sustainability. To this end, we need a better understanding of the interactions, synergies, and conflicts between policies (Cunningham et al., 2013; Geels, 2014). As a contribution to this emerging field of research, and to inform its next steps, I present a review of the methods available to quantify the impacts of complex policy interactions, since there is no established method among practitioners, and I explore the merits of such attempts. The presentation builds on key works in the field of complexity science (e.g. Anderson, 1972), revisiting and combining these with more recent contributions in the emerging field of policy, complex systems, and evaluation (e.g. Johnstone et al., 2010). With a coalition of UK Government departments, agencies, and Research Councils soon to announce the launch of a new internationally leading centre to pioneer, test, and promote innovative and inclusive methods for policy evaluation across the energy-environment-food nexus, the contribution is particularly timely.