886 results for Recontextualised found object


Relevance: 20.00%

Abstract:
Light-frame wood buildings are widely built in the United States (U.S.). Natural hazards cause huge losses to light-frame wood construction. This study proposes methodologies and a framework to evaluate the performance and risk of light-frame wood construction. Performance-based engineering (PBE) aims to ensure that a building achieves the desired performance objectives when subjected to hazard loads. In this study, the collapse risk of a typical one-story light-frame wood building is determined using the Incremental Dynamic Analysis method. The collapse risks of buildings at four sites in the Eastern, Western, and Central regions of the U.S. are evaluated. Various sources of uncertainty are considered in the collapse risk assessment, so that the influence of uncertainties on the collapse risk of light-frame wood construction is evaluated. The collapse risks of the same building subjected to maximum considered earthquakes in different seismic zones are found to be non-uniform. In certain areas of the U.S., snow accumulation is significant, causing substantial economic losses and threatening life safety. Little research has investigated the snow hazard in combination with the seismic hazard. A Filtered Poisson Process (FPP) model is developed in this study, overcoming the shortcomings of the typically used Bernoulli model. The FPP model is validated by comparing the simulation results to weather records obtained from the National Climatic Data Center. The FPP model is applied in the proposed framework to assess the risk of a light-frame wood building subjected to combined snow and earthquake loads. The snow accumulation has a significant influence on the seismic losses of the building. The Bernoulli snow model underestimates the seismic loss of buildings in areas with snow accumulation. An object-oriented framework is proposed in this study to perform risk assessment for light-frame wood construction.
For homeowners and stakeholders, risk expressed in terms of economic losses is much easier to understand than engineering parameters (e.g., inter-story drift). The proposed framework is used in two applications. One is to assess the loss of the building subjected to mainshock-aftershock sequences. Aftershock and downtime costs are found to be important factors in the assessment of seismic losses. The framework is also applied to a wood building in the state of Washington to assess the loss of the building subjected to combined earthquake and snow loads. The proposed framework is proven to be an appropriate tool for risk assessment of buildings subjected to multiple hazards. Limitations and directions for future work are also identified.
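As an illustration of the filtered-Poisson idea, the sketch below simulates daily ground snow depth: snowfall events arrive as a Poisson process and each deposit decays exponentially, so accumulation across events is captured (which a Bernoulli on/off model misses). This is a toy in Python, not the study's calibrated model; the arrival rate, mean snowfall, and melt constant are hypothetical.

```python
import math
import random

def simulate_snow_depth_fpp(rate_per_day, mean_event_snow, melt_rate,
                            season_days, rng):
    """Filtered Poisson process: snowfall events arrive as a Poisson
    process; each event's deposit decays exponentially (melting), and
    the ground depth is the sum of all decayed deposits."""
    depth = [0.0] * season_days
    events = []  # (day, deposited amount)
    for day in range(season_days):
        # Sample today's number of snowfall events (Poisson, by inversion)
        n = 0
        p = f = math.exp(-rate_per_day)
        u = rng.random()
        while u > f:
            n += 1
            p *= rate_per_day / n
            f += p
        for _ in range(n):
            events.append((day, rng.expovariate(1.0 / mean_event_snow)))
        # "Filter": every past event contributes an exponentially decayed amount
        depth[day] = sum(a * math.exp(-melt_rate * (day - d))
                         for d, a in events)
    return depth
```

Because past deposits persist and decay, successive storms stack, which is exactly the accumulation effect the FPP model was introduced to represent.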


Heat transfer is one of the most critical issues in the design and implementation of large-scale microwave heating systems, in which improving the microwave absorption of materials and suppressing uneven temperature distribution are the two main objectives. The present work focuses on the analysis of heat transfer in microwave heating for achieving highly efficient microwave-assisted steelmaking, through investigations of the following aspects: (1) characterization of microwave dissipation using the derived equations, (2) quantification of magnetic loss, (3) determination of microwave absorption properties of materials, (4) modeling of microwave propagation, (5) simulation of heat transfer, and (6) improvement of microwave absorption and heating uniformity. Microwave heating is attributed to heat generation in materials, which depends on the microwave dissipation. To theoretically characterize microwave heating, simplified equations were derived for the transverse electromagnetic mode (TEM) power penetration depth, the microwave field attenuation length, and the half-power depth of microwaves in materials having both magnetic and dielectric responses. This was followed by the development of a simplified equation for quantifying magnetic loss in materials under microwave irradiation, demonstrating the importance of magnetic loss in microwave heating. Permittivity and permeability measurements of various materials, namely hematite, magnetite concentrate, wüstite, and coal, were performed, and microwave loss calculations for these materials were carried out. It is suggested that magnetic loss can play a major role in the heating of magnetic dielectrics. Microwave propagation in various media was predicted using the finite-difference time-domain method. For lossy magnetic dielectrics, the dissipation of microwaves in the medium is ascribed to the decay of both the electric and magnetic fields.
The heat transfer process in microwave heating of magnetite, a typical magnetic dielectric, was simulated using an explicit finite-difference approach. It is demonstrated that heat generation due to microwave irradiation dominates the initial temperature rise, while heat radiation heavily affects the temperature distribution, giving rise to a hot spot in the predicted temperature profile. Microwave heating at 915 MHz exhibits better heating homogeneity than heating at 2450 MHz due to the larger microwave penetration depth. To minimize or avoid temperature nonuniformity during microwave heating, the optimization of object dimensions should be considered. The calculated reflection loss over the temperature range of heating is found to be useful for rapid optimization of the absorber dimension, which increases microwave absorption and achieves relatively uniform heating. To further improve the heating effectiveness, a function for evaluating absorber impedance matching in microwave heating was proposed. It is found that maximum absorption is associated with perfect impedance matching, which can be achieved either by selecting a reasonable sample dimension or by modifying the microwave parameters of the sample.
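The abstract does not reproduce the derived simplified equations, but the quantity they approximate can be sketched directly: for a plane (TEM) wave in a medium with both dielectric and magnetic loss, the power penetration depth follows from the complex propagation constant γ = jω√(με), with Dp = 1/(2α) and α = Re(γ). The material parameters in the test values below are hypothetical, not the measured data of the study.

```python
import cmath
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def power_penetration_depth(freq_hz, eps_r, tan_d_e, mu_r=1.0, tan_d_m=0.0):
    """Depth at which transmitted power falls to 1/e for a plane wave in a
    lossy magnetic dielectric, from the exact complex propagation constant."""
    omega = 2 * math.pi * freq_hz
    eps = eps_r * (1 - 1j * tan_d_e)   # relative complex permittivity
    mu = mu_r * (1 - 1j * tan_d_m)     # relative complex permeability
    gamma = 1j * omega * cmath.sqrt(mu * eps) / C0
    alpha = gamma.real                  # attenuation constant, Np/m
    return 1.0 / (2.0 * alpha)
```

Since α scales with frequency, the same material shows a larger penetration depth at 915 MHz than at 2450 MHz, consistent with the better heating homogeneity reported above; adding magnetic loss (tan δm > 0) shrinks the depth further.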


File system security is fundamental to the security of UNIX and Linux systems, since in these systems almost everything is in the form of a file. To protect system files and other sensitive user files from unauthorized access, certain security schemes are chosen and used by different organizations in their computer systems. A file system security model provides a formal description of a protection system. Each security model is associated with specified security policies, which focus on one or more of the security principles: confidentiality, integrity, and availability. A security policy is not only about "who" can access an object, but also about "how" a subject can access an object. To enforce the security policies, each access request is checked against the specified policies to decide whether it is allowed or rejected. The current protection schemes in UNIX/Linux systems focus on access control. Besides the basic access control scheme of the system itself, which includes permission bits, the setuid/seteuid mechanisms, and the root account, there are other protection models, such as Capabilities, Domain Type Enforcement (DTE), and Role-Based Access Control (RBAC), supported and used in certain organizations. These models protect the confidentiality of the data directly. The integrity of the data is protected indirectly, by only allowing trusted users to operate on the objects. The access control decisions of these models depend on either the identity of the user or the attributes of the process the user can execute, together with the attributes of the objects. Adoption of these sophisticated models has been slow; this is likely due to the enormous complexity of specifying controls over a large file system and the need for system administrators to learn a new paradigm for file protection. We propose a new security model: the file system firewall.
It adapts the familiar network firewall protection model, used to control the data that flows between networked computers, to file system protection. This model can support access control decisions based on any system-generated attribute of the access request, e.g., the time of day. Access control decisions are not based on a single entity, such as the account in traditional discretionary access control or the domain name in DTE. In the file system firewall, access decisions are made upon situations involving multiple entities. A situation is programmable, with predicates on the attributes of the subject, the object, and the system. The file system firewall specifies the appropriate actions for these situations. We implemented a prototype of the file system firewall on SUSE Linux. Preliminary results of performance tests on the prototype indicate that the runtime overhead is acceptable. We compared the file system firewall with TE in SELinux to show that the firewall model can accommodate many other access control models. Finally, we show the ease of use of the firewall model. When the firewall is restricted to a specified part of the system, all other resources are unaffected, which enables a relatively smooth adoption. This, together with the fact that it is a familiar model to system administrators, will facilitate adoption and correct use. The user study we conducted on traditional UNIX access control, SELinux, and the file system firewall confirmed this: beginner users found it easier to use and faster to learn than the traditional UNIX access control scheme and SELinux.
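The rule-chain idea above can be sketched as first-match evaluation over programmable "situations" (predicates on subject, object, and system attributes) with a default decision, in the style of a network firewall. This is a minimal Python sketch, not the paper's SUSE Linux prototype; the `Request` fields and the example rules are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Request:
    user: str   # subject attribute
    path: str   # object attribute
    op: str     # "read", "write", ...
    hour: int   # a system-generated attribute: time of day

@dataclass
class Rule:
    predicate: Callable[[Request], bool]  # the programmable "situation"
    action: str                           # "allow" or "deny"

class FileSystemFirewall:
    """First-match evaluation of situation rules, falling through to a
    default decision when no situation applies."""
    def __init__(self, rules, default="deny"):
        self.rules = list(rules)
        self.default = default

    def decide(self, req):
        for rule in self.rules:
            if rule.predicate(req):
                return rule.action
        return self.default

rules = [
    # deny writes under /etc outside business hours, for any account
    Rule(lambda r: r.path.startswith("/etc") and r.op == "write"
         and not (9 <= r.hour < 17), "deny"),
    Rule(lambda r: r.user == "alice", "allow"),
]
fw = FileSystemFirewall(rules)
```

Note how the first rule combines object, operation, and system attributes in one predicate, something identity-only models cannot express directly.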


Tenascins represent a family of extracellular matrix glycoproteins with distinctive expression patterns. Here we have analyzed the most recently described member, tenascin-W, in breast cancer. Mammary tumors isolated from transgenic mice expressing hormone-induced oncogenes reveal tenascin-W in the stroma around lesions with a high likelihood of metastasis. The presence of tenascin-W was correlated with the expression of its putative receptor, alpha8 integrin. HC11 cells derived from normal mammary epithelium do not express alpha8 integrin and fail to cross tenascin-W-coated filters. However, 4T1 mammary carcinoma cells do express alpha8 integrin and their migration is stimulated by tenascin-W. The expression of tenascin-W is induced by BMP-2 but not by TGF-beta1, though the latter is a potent inducer of tenascin-C. The expression of tenascin-W is dependent on p38MAPK and JNK signaling pathways. Since proinflammatory cytokines also act through p38MAPK and JNK signaling pathways, the possible role of TNF-alpha in tenascin-W expression was also examined. TNF-alpha induced the expression of both tenascin-W and tenascin-C, and this induction was p38MAPK- and cyclooxygenase-dependent. Our results show that tenascin-W may be a useful diagnostic marker for breast malignancies, and that the induction of tenascin-W in the tumor stroma may contribute to the invasive behavior of tumor cells.


OBJECT: The aim of this study was to analyze decompressive craniectomy (DC) in the setting of subarachnoid hemorrhage (SAH) with bleeding, infarction, or brain swelling as the underlying pathology in a large cohort of consecutive patients. METHODS: Decompressive craniectomy was performed in 79 of 939 patients with SAH. Patients were stratified according to the indication for DC: 1) primary brain swelling without, or 2) with, additional intracerebral hematoma; 3) secondary brain swelling without rebleeding or infarcts; and 4) secondary brain swelling with infarcts or 5) with rebleeding. Outcome was assessed according to the modified Rankin Scale (mRS) at 6 months (mRS Score 0-3 favorable vs 4-6 unfavorable). RESULTS: Overall, 61 (77.2%) of 79 patients who did and 292 (34%) of the 860 patients who did not undergo DC had a poor clinical grade on admission (World Federation of Neurosurgical Societies Grade IV-V, p < 0.0001). A favorable outcome was attained in 21 (26.6%) of 79 patients who had undergone DC. In a comparison of favorable outcomes in patients with primary (28.0%) or secondary DC (25.5%), no difference could be found (p = 0.8). Subgroup analysis with respect to the underlying indication for DC (brain swelling vs bleeding vs infarction) revealed no difference in the rate of favorable outcomes. On multivariate analysis, acute hydrocephalus (p = 0.009) and clinical signs of herniation (p = 0.02) were significantly associated with an unfavorable outcome. CONCLUSIONS: Based on the data in this study, the authors concluded that primary as well as secondary craniectomy might be warranted, regardless of the underlying etiology (hemorrhage, infarction, or brain swelling) and admission clinical grade of the patient. The time from the onset of intractable intracranial pressure to DC seems to be crucial for a favorable outcome, even when a DC is performed late in the disease course after SAH.


In this paper we compare the performance of two image classification paradigms (object- and pixel-based) for creating a land cover map of Asmara, the capital of Eritrea, and its surrounding areas, using Landsat ETM+ imagery acquired in January 2000. The image classification methods used were maximum likelihood for the pixel-based approach and Bhattacharyya distance for the object-oriented approach, available in the ArcGIS and SPRING software packages, respectively. Advantages and limitations of both approaches are presented and discussed. Classification outputs were assessed using overall accuracy and Kappa indices. The pixel- and object-based classification methods result in overall accuracies of 78% and 85%, respectively. The Kappa coefficients for the pixel- and object-based approaches were 0.74 and 0.82, respectively. Although the pixel-based approach is the most commonly used method, assessment and visual interpretation of the results clearly reveal that the object-oriented approach has advantages for this specific case study.
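The two accuracy measures reported above come straight from the error (confusion) matrix. The sketch below computes Cohen's kappa, which corrects overall accuracy for chance agreement; the example matrix in the test is hypothetical, not the matrices from this study.

```python
def overall_accuracy(confusion):
    """Fraction of correctly classified samples (diagonal / total)."""
    n = sum(sum(row) for row in confusion)
    return sum(confusion[i][i] for i in range(len(confusion))) / n

def kappa(confusion):
    """Cohen's kappa from a confusion matrix (rows = reference classes,
    columns = mapped classes): (observed - expected) / (1 - expected),
    where 'expected' is the chance agreement from the marginals."""
    n = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / (n * n)
    return (observed - expected) / (1 - expected)
```

A kappa near the overall accuracy indicates balanced class marginals; large gaps (as between 85% accuracy and kappa 0.82 here) flag chance agreement absorbed by the correction.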


Behavioral reflection is crucial for supporting, for example, functional upgrades, on-the-fly debugging, or the monitoring of critical applications. However, the use of reflective features can lead to severe problems due to infinite metacall recursion, even in simple cases. This is especially a problem when reflecting on core language features, since there is a high chance that such features are used to implement the reflective behavior itself. In this paper we analyze the problem of infinite meta-object call recursion and solve it by providing a first-class representation of meta-level execution: at any point in the execution of a system it can be determined whether we are operating on the meta level or the base level, so that we can prevent infinite recursion. We present how meta-level execution can be represented by a meta-context and how reflection becomes context-aware. Our solution makes it possible to freely apply behavioral reflection even on system classes: the meta-context brings stability to behavioral reflection. We validate the concept with a robust implementation and we present benchmarks.
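The recursion problem and the meta-context guard can be sketched in a few lines: an instrumented core operation notifies a meta-object, and the meta-object's own behavior uses that same instrumented operation. Without a representation of "we are already at the meta level", the hook would call itself forever. This is a toy Python analogue (thread-local flag), not the paper's Smalltalk implementation; all names are hypothetical.

```python
import threading

_meta = threading.local()   # per-thread flag: a minimal "meta-context"
events = []                 # trace produced by the meta-level behavior

def instrumented_append(lst, item):
    """Stand-in for an instrumented core operation: every call first
    notifies the meta level, then performs the base-level operation."""
    metaobject_hook(("append", item))
    lst.append(item)

def metaobject_hook(event):
    """Meta-level behavior. It records the event using the *instrumented*
    operation itself; without the guard below this would recurse forever
    (hook -> instrumented_append -> hook -> ...)."""
    if getattr(_meta, "active", False):
        return                 # already executing at the meta level: stop
    _meta.active = True        # enter the meta-context
    try:
        instrumented_append(events, event)
    finally:
        _meta.active = False   # leave the meta-context
```

The guard makes the meta level re-entrant-safe, so even "system" operations (here, list appends) can be reflected upon freely.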


Back-in-time debuggers are extremely useful tools for identifying the causes of bugs, as they allow us to inspect the past states of objects no longer present in the current execution stack. Unfortunately, the "omniscient" approaches that try to remember all previous states are impractical, because they either consume too much space or are far too slow. Several approaches rely on heuristics to limit these penalties, but they ultimately end up throwing out too much relevant information. In this paper we propose a practical approach to back-in-time debugging that attempts to keep track of only the relevant past data. In contrast to other approaches, we keep object history information together with the regular objects in the application memory. Although seemingly counter-intuitive, this approach has the effect that past data that is not reachable from current application objects (and hence no longer relevant) is automatically garbage collected. We describe the technical details of our approach, and we present benchmarks that demonstrate that memory consumption stays within practical bounds. Furthermore, since our approach works at the virtual machine level, the performance penalty is significantly better than with other approaches.
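The key idea, storing history on the object itself so it is reclaimed together with the object, can be illustrated at the language level. The paper's approach works inside the virtual machine; the sketch below is only a Python analogue using attribute interception, with hypothetical names.

```python
class Tracked:
    """Object whose field writes are recorded in a history kept on the
    object itself. When the object becomes unreachable, its history is
    garbage collected with it, instead of accumulating in a global trace."""

    def __init__(self):
        # bypass our own __setattr__ while bootstrapping the history list
        object.__setattr__(self, "_history", [])

    def __setattr__(self, name, value):
        # remember the previous value before overwriting it
        self._history.append((name, getattr(self, name, None)))
        object.__setattr__(self, name, value)

    def past_values(self, name):
        """All previous values of a field, oldest first."""
        return [old for (n, old) in self._history if n == name]
```

Because `_history` is an ordinary field, Python's garbage collector frees it exactly when the object dies, mirroring the "unreachable past data is no longer relevant" argument above.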


A large body of research analyzes the runtime execution of a system to extract abstract behavioral views. Those approaches primarily analyze control flow, by tracing method execution events, or they analyze object graphs of heap snapshots. However, they do not capture how objects are passed through the system at runtime. We refer to the exchange of objects as the object flow, and we claim that analyzing object flow is necessary if we are to understand the runtime behavior of an object-oriented application. We propose and detail Object Flow Analysis, a novel dynamic analysis technique that takes this new information into account. To evaluate its usefulness, we present a visual approach that allows a developer to study classes and components in terms of how they exchange objects at runtime. We illustrate our approach on three case studies.
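The difference from control-flow tracing can be made concrete: instead of logging which methods run, we follow individual objects and record the locations they pass through. The sketch below is a minimal Python illustration of this kind of object tracking, not the paper's technique; the shop/warehouse names are hypothetical.

```python
import collections

# object identity -> ordered list of locations the object passed through
flow = collections.defaultdict(list)

def track(obj, location):
    """Record that `obj` was handed to `location` (e.g. a method), building
    per-object flow paths rather than a method-call trace."""
    flow[id(obj)].append(location)
    return obj

# toy producer/consumer: follow one object through two components
def make_order():
    return track({"item": "book"}, "Shop.make_order")

def ship(order):
    track(order, "Warehouse.ship")

order = make_order()
ship(order)
```

Aggregating these per-object paths by class yields exactly the kind of "which components exchange objects" view the visual approach above presents.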


As object-oriented languages are extended with novel modularization mechanisms, better underlying models are required to implement these high-level features. This paper describes CELL, a language model that builds on delegation-based chains of object fragments. Composition of groups of cells is used: 1) to represent objects, 2) to realize various forms of method lookup, and 3) to keep track of method references. A running prototype of CELL is provided and used to realize the basic kernel of a Smalltalk system. The paper shows, using several examples, how higher-level features such as traits can be supported by the lower-level model.
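The delegation-based chains at the heart of the model can be sketched as follows: each cell holds a fragment of behavior and delegates unknown lookups to the next cell, so composing cells yields an object and determines method lookup. This is a toy Python analogue of the idea, not the CELL prototype; the selectors and cell contents are hypothetical.

```python
class Cell:
    """An object fragment holding some methods and delegating unresolved
    lookups to its parent cell, forming a delegation chain."""

    def __init__(self, methods, parent=None):
        self.methods = dict(methods)
        self.parent = parent

    def lookup(self, selector):
        cell = self
        while cell is not None:
            if selector in cell.methods:
                return cell.methods[selector]
            cell = cell.parent      # delegate up the chain
        raise AttributeError(selector)

    def send(self, selector, *args):
        return self.lookup(selector)(*args)

# composing two cells: a base fragment plus a trait-like fragment on top
base = Cell({"greet": lambda: "hello"})
composed = Cell({"shout": lambda: "HELLO"}, parent=base)
```

Higher-level features such as traits then correspond to choosing which fragments to stack in the chain, which is the flattening the lower-level model supports.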


The rapid growth of object-oriented development over the past twenty years has given rise to many object-oriented systems that are large, complex and hard to maintain. Object-Oriented Reengineering Patterns addresses the problem of understanding and reengineering such object-oriented legacy systems. This book collects and distills successful techniques in planning a reengineering project, reverse-engineering, problem detection, migration strategies and software redesign. The material in this book is presented as a set of "reengineering patterns" --- recurring solutions that experts apply while reengineering and maintaining object-oriented systems. The principles and techniques described in this book have been observed and validated in a number of industrial projects, and reflect best practice in object-oriented reengineering.