958 results for MODEL TESTS


Relevance:

30.00%

Publisher:

Abstract:

Lymphocyte stimulation tests (LST) were performed in five dogs sensitised with ovalbumin (OVA) and seven healthy dogs. In addition, all five OVA-sensitised and two control dogs were tested after two in vivo provocations with OVA-containing eye drops. The isolated cells were suspended in culture media containing OVA and cultured for up to 12 days. Proliferation was measured as the reduction in 5,6-carboxyfluorescein diacetate succinimidyl ester (CFSE) intensity by flow cytometry on days 0, 3, 6, 9 and 12. A cell proliferation index (CPI) for each day and the area under the curve (AUC) of the CPI were calculated for each dog. All OVA-sensitised dogs demonstrated increased erythema after conjunctival OVA application. The presence of OVA-specific lymphocytes was demonstrated in 2/5 OVA-sensitised dogs before and 4/5 after in vivo provocation. Using the AUC, the difference between OVA-sensitised and control dogs was significant in all three LST before in vivo provocation (P<0.05) and borderline significant (P=0.053) in 2/3 LST after provocation. The most significant difference in CPI was observed after 9 days of culture (P=0.001). This pilot study indicates that the LST allows detection of rare antigen-specific memory T-cells in dogs previously sensitised to, but not concurrently challenged with, a specific antigen.
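
A minimal sketch of the abstract's two summary statistics, assuming a hypothetical CPI definition (the CFSE-diluted fraction in the stimulated culture divided by that in the control) and invented readouts:

```python
# Sketch: per-day cell proliferation index (CPI) and its trapezoidal AUC.
# All readout values are invented; the paper's exact CPI definition may differ.
import numpy as np

days = np.array([0, 3, 6, 9, 12])
stimulated = np.array([0.01, 0.05, 0.18, 0.42, 0.55])    # OVA-containing culture
unstimulated = np.array([0.01, 0.02, 0.04, 0.06, 0.08])  # medium-only control

cpi = stimulated / unstimulated                          # CPI per sampling day
auc = np.sum((cpi[1:] + cpi[:-1]) / 2 * np.diff(days))   # trapezoidal AUC

print("CPI per day:", np.round(cpi, 2))
print(f"AUC of CPI: {auc:.1f}")
```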

Relevance:

30.00%

Publisher:

Abstract:

Mechanical testing of the periodontal ligament requires a practical experimental model. Bovine teeth are advantageous in terms of size and availability, but information is lacking on the anatomy and histology of their periodontium. The aim of this study, therefore, was to characterize the anatomy and histology of the attachment apparatus in fully erupted bovine mandibular first molars. A total of 13 teeth were processed for the production of undecalcified ground sections and decalcified semi-thin sections, for NaOH maceration, and for polarized light microscopy. Histomorphometric measurements relevant to the mechanical behavior of the periodontal ligament included ligament width; the number, size, and area fraction of blood vessels; and fractal analysis of the two hard-soft tissue interfaces. The histological and histomorphometric analyses were performed at four different root depths and at six circumferential locations around the distal and mesial roots. The variety of techniques applied provided a comprehensive view of the tissue architecture of the bovine periodontal ligament. Marked regional variations were observed in ligament width, in the surface geometry of the two bordering hard tissues (cementum and alveolar bone), in the structural organization of the principal periodontal ligament connective tissue fibers, and in the size, number, and numerical density of blood vessels. No predictable pattern was observed, except for a statistically significant increase in the area fraction of blood vessels from apical to coronal. The periodontal ligament was up to three times wider in bovine teeth than in human teeth. The fractal analyses were in agreement with the histological observations, showing frequent signs of remodeling activity in the alveolar bone, a finding which may be related to the magnitude and direction of occlusal forces in ruminants. Although samples from the apical root portion are not suitable for biomechanical testing, all other levels in the buccal and lingual aspects of the mesial and distal roots may be considered. The bucco-mesial aspect of the distal root appears to be the most suitable location.
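
The fractal analysis of an interface contour is commonly done by box counting; a sketch on a synthetic profile, not the study's section images:

```python
# Sketch: box-counting fractal dimension of a rough interface, the kind of
# analysis applied to the cementum-PDL and bone-PDL interfaces. The input
# here is a synthetic noisy profile; real input would be a binarized image.
import numpy as np

rng = np.random.default_rng(0)
n = 1024
x = np.arange(n)
y = np.cumsum(rng.normal(size=n))                    # rough interface profile
y = ((y - y.min()) / np.ptp(y) * (n - 1)).astype(int)

img = np.zeros((n, n), dtype=bool)
img[y, x] = True                                     # rasterized interface

sizes, counts = [], []
for box in [4, 8, 16, 32, 64, 128]:
    # count boxes of side `box` that contain any interface pixel
    occ = img.reshape(n // box, box, n // box, box).any(axis=(1, 3)).sum()
    sizes.append(box)
    counts.append(occ)

D = -np.polyfit(np.log(sizes), np.log(counts), 1)[0]
print(f"box-counting dimension ~ {D:.2f}")
```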

Relevance:

30.00%

Publisher:

Abstract:

This report presents the development of a Stochastic Knock Detection (SKD) method for combustion knock detection in a spark-ignition engine using a model-based design approach. A Knock Signal Simulator (KSS) was developed as the plant model for the engine. The KSS generates cycle-to-cycle accelerometer knock intensities following a stochastic approach: intensities are drawn by a Monte Carlo method from a lognormal distribution whose parameters were predetermined from engine tests and depend on spark timing, engine speed, and load. Previous studies on multiple engines have shown the lognormal distribution to be a good approximation to the distribution of measured knock intensities over a range of engine conditions and spark timings. The SKD method is implemented in a Knock Detection Module (KDM), which processes the knock intensities generated by the KSS with a stochastic distribution estimation algorithm and outputs estimates of the high and low knock intensity levels, characterizing knock and the reference level respectively. These estimates are then used to determine a knock factor, which provides a quantitative measure of knock level and can be used as a feedback signal to control engine knock. The knock factor is analyzed and compared with a traditional knock detection method under various engine operating conditions. To verify the effectiveness of the SKD method, a knock controller was also developed and tested in a model-in-the-loop (MIL) system. The objective of the knock controller is to allow the engine to operate as close as possible to its borderline spark timing without significant engine knock. The controller parameters were tuned to minimize the cycle-to-cycle variation in spark timing and the settling time of the controller in responding to a step increase in spark advance that brings on engine knock. The simulation results showed that the combined system can adequately model engine knock and evaluate knock control strategies for a wide range of engine operating conditions.
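
A minimal sketch of the KSS sampling idea, with illustrative (not calibrated) parameters and simple percentile stand-ins for the KDM's distribution estimates:

```python
# Sketch: draw cycle-to-cycle knock intensities from a lognormal
# distribution (Monte Carlo), then form a knock factor from estimated
# high and low intensity levels. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def knock_intensities(mu, sigma, n_cycles):
    """Lognormal knock intensities for n_cycles engine cycles."""
    return rng.lognormal(mean=mu, sigma=sigma, size=n_cycles)

intensities = knock_intensities(mu=0.5, sigma=0.8, n_cycles=1000)

# Crude percentile stand-ins for the paper's stochastic estimation algorithm:
low = np.percentile(intensities, 50)    # reference (no-knock) level
high = np.percentile(intensities, 95)   # knocking level
knock_factor = high / low               # quantitative measure of knock

print(f"knock factor = {knock_factor:.2f}")
```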

Relevance:

30.00%

Publisher:

Abstract:

A phenomenological transition film evaporation model was introduced into a pore network model, accounting for pore radius, contact angle, non-isothermal interface temperature, microscale fluid flow, and heat and mass transfer. This was achieved by modeling the transition film region of the menisci in each pore throughout the porous transport layer of a half-cell polymer electrolyte membrane (PEM) fuel cell. The model presented in this research is compared with the standard diffusive fuel cell modeling approach to evaporation and is shown to surpass the conventional approach in predicting evaporation rates in porous media. The diffusive evaporation models used in many fuel cell transport models assume a constant evaporation rate across the entire liquid-air interface. The transition film model was implemented in the pore network model to address this issue and to introduce a pore-size dependency of the evaporation rate. This is accomplished by evaluating the transition film evaporation rate determined by the kinetic model for every pore containing liquid water in the porous transport layer (PTL). The comparison of the transition film and diffusive evaporation models shows that the transition film model predicts higher evaporation rates for smaller pore sizes. This is an important consideration given the micro-scale pore sizes in the PTL, and it becomes even more substantial for fuel cells containing a microporous layer (MPL) or a large variance in pore size. Experiments were performed to validate the transition film model by monitoring evaporation rates from a non-zero contact angle water droplet on a heated substrate. The substrate was a glass plate with a hydrophobic coating to reduce wettability, and the tests were performed at constant substrate temperature and relative humidity. The transition film model was able to accurately predict the droplet volume as time elapsed. By implementing the transition film model in a pore network model, the evaporation rates in the PTL can be modeled more accurately. This improves the ability of a pore network model to predict the distribution of liquid water and, ultimately, the level of flooding exhibited in a PTL under various operating conditions.
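
A sketch of the per-pore bookkeeping described above; the inverse-radius rate law is a placeholder assumption, not the paper's kinetic transition-film model:

```python
# Sketch: evaluate an evaporation rate per liquid-filled pore (rate depends
# on pore radius) instead of one constant rate over the whole liquid-air
# interface. The rate law and all numbers are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
radii = rng.lognormal(mean=np.log(10e-6), sigma=0.5, size=500)  # pore radii [m]

def evap_rate_per_area(r, k=1e-9):
    """Placeholder kinetic law: smaller pores evaporate faster per unit area."""
    return k / r  # [kg/(m^2*s)], illustrative scaling only

meniscus_area = 2 * np.pi * radii**2            # hemispherical cap area [m^2]
per_pore = evap_rate_per_area(radii) * meniscus_area
print(f"total evaporation over {radii.size} wet pores: {per_pore.sum():.3e} kg/s")
```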

Relevance:

30.00%

Publisher:

Abstract:

As an important civil engineering material, asphalt concrete (AC) is commonly used to build road surfaces, airports, and parking lots. With traditional laboratory tests and theoretical equations, it is a challenge to fully understand such a random composite material. Based on the discrete element method (DEM), this research seeks to develop and implement computer models as research tools for improving the understanding of AC microstructure-based mechanics. Three categories of approaches were developed or employed to simulate the microstructure of AC materials: randomly generated models, idealized models, and image-based models. The image-based models are recommended for accurately predicting AC performance, while the other models are recommended as research tools for gaining insight into AC microstructure-based mechanics. A viscoelastic micromechanical model was developed to capture viscoelastic interactions within the AC microstructure. Four types of constitutive models were built to address the four categories of interactions within an AC specimen. Each constitutive model consists of three parts representing three different interaction behaviors: a stiffness model (force-displacement relation), a bonding model (shear and tensile strengths), and a slip model (frictional property). Three techniques were developed to reduce the computational time for AC viscoelastic simulations; for typical three-dimensional models, the computational time was reduced from years or months to days or hours. Dynamic modulus and creep stiffness tests were simulated, and methodologies were developed to determine the viscoelastic parameters. The DE models successfully predicted dynamic modulus, phase angles, and creep stiffness over a wide range of frequencies, temperatures, and time spans. Mineral aggregate morphology characteristics (sphericity, orientation, and angularity) were studied to investigate their impact on AC creep stiffness, and were found to affect it significantly. Pavement responses and pavement-vehicle interactions were investigated by simulating pavement sections under a rolling wheel; wheel acceleration, steady rolling, and deceleration were found to significantly affect contact forces. A summary and recommendations are provided in the last chapter, and parts of the computer code are provided in the appendices.
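
As an illustration of the dynamic modulus and phase angle targets mentioned above, a sketch using a classical Burgers model with illustrative parameters (the thesis uses DEM contact models, not this closed form):

```python
# Sketch: dynamic modulus |E*| and phase angle from a Burgers model's
# complex compliance, the kind of frequency sweep the DE simulations are
# checked against. Parameters are illustrative, not calibrated to any mix.
import numpy as np

E1, eta1 = 20e9, 5e9     # Maxwell spring [Pa] and dashpot [Pa*s]
E2, eta2 = 5e9, 1e8      # Kelvin-Voigt spring [Pa] and dashpot [Pa*s]

freqs = np.array([0.1, 1.0, 10.0, 25.0])   # loading frequencies [Hz]
omega = 2 * np.pi * freqs

# Burgers complex compliance: Maxwell element in series with Kelvin-Voigt
J = 1 / E1 + 1 / (1j * omega * eta1) + 1 / (E2 + 1j * omega * eta2)
E_star = 1 / J

for f, E in zip(freqs, E_star):
    delta = np.degrees(np.arctan2(E.imag, E.real))   # phase angle [deg]
    print(f"{f:5.1f} Hz: |E*| = {abs(E)/1e9:.2f} GPa, phase = {delta:.1f} deg")
```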

Relevance:

30.00%

Publisher:

Abstract:

Ultra-high performance fiber reinforced concrete (UHPFRC) has arisen from the implementation of a variety of concrete engineering and materials science concepts developed over the last century. This material offers superior strength, serviceability, and durability over its conventional counterparts. One of the most important differences between UHPFRC and other concrete materials is its ability to resist fracture through the use of randomly dispersed discontinuous fibers and improvements to the fiber-matrix bond. Of particular interest are the material's ability to carry higher loads after first crack and its high fracture toughness. In this research, the fracture behavior of UHPFRC with steel fibers was studied to examine the effect of several parameters on fracture behavior and to develop a fracture model based on a nonlinear curve fit of the data. A series of three-point bending tests was performed on single-edge notched prisms (SENPs) of different cross-sections, span/depth (S/D) ratios, curing regimes, ages, and fiber contents; compression tests were also performed for quality assurance. By comparing the results from prisms of different sizes, this study examines the weakening mechanism due to the size effect. Furthermore, employing the concept of fracture energy made it possible to compare fracture toughness and ductility. The model was determined from a fit to the P-w fracture curves and cross-checked against the test results. The model was then compared to the model proposed by the AFGC in 2003 and to the ACI 544 model for conventional fiber-reinforced concretes.
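
A sketch of the work-of-fracture calculation implied by the P-w curves, with invented numbers and the self-weight correction omitted:

```python
# Sketch: fracture energy G_F from a three-point-bending load vs. crack-
# opening (P-w) record of a single-edge-notched prism, using the usual
# work-of-fracture definition. All numbers are hypothetical.
import numpy as np

w = np.linspace(0, 4e-3, 9)                        # crack opening [m]
P = np.array([0, 9, 12, 11, 9, 7, 5, 3, 1]) * 1e3  # load [N]

work = np.sum((P[1:] + P[:-1]) / 2 * np.diff(w))   # area under P-w curve [J]

b, d, a0 = 0.10, 0.10, 0.03        # prism width, depth, notch depth [m]
ligament = b * (d - a0)            # fractured ligament area [m^2]
G_F = work / ligament
print(f"G_F = {G_F:.0f} J/m^2")
```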

Relevance:

30.00%

Publisher:

Abstract:

File system security is fundamental to the security of UNIX and Linux systems, since in these systems almost everything is in the form of a file. To protect system files and other sensitive user files from unauthorized access, organizations choose and use certain security schemes in their computer systems. A file system security model provides a formal description of a protection system. Each security model is associated with specified security policies which focus on one or more of the security principles: confidentiality, integrity, and availability. A security policy is not only about "who" can access an object, but also about "how" a subject can access an object. To enforce the security policies, each access request is checked against the specified policies to decide whether it is allowed or rejected. The current protection schemes in UNIX/Linux systems focus on access control. Besides the basic access control scheme of the system itself, which includes permission bits, the setuid and seteuid mechanisms, and the root account, there are other protection models, such as Capabilities, Domain Type Enforcement (DTE), and Role-Based Access Control (RBAC), supported and used in certain organizations. These models protect the confidentiality of the data directly; the integrity of the data is protected indirectly, by allowing only trusted users to operate on the objects. The access control decisions of these models depend on either the identity of the user or the attributes of the process the user can execute, together with the attributes of the objects. Adoption of these sophisticated models has been slow; this is likely due to the enormous complexity of specifying controls over a large file system and the need for system administrators to learn a new paradigm for file protection. We propose a new security model: the file system firewall. It adapts the familiar network firewall model, used to control the data that flows between networked computers, to file system protection. This model can base access control decisions on any system-generated attributes of the access requests, e.g., time of day. Access control decisions are not tied to a single entity, such as the account in traditional discretionary access control or the domain in DTE; in the file system firewall, access decisions are made upon situations involving multiple entities. A situation is programmable with predicates on the attributes of the subject, the object, and the system, and the file system firewall specifies the appropriate action for each situation. We implemented a prototype of the file system firewall on SUSE Linux. Preliminary performance tests on the prototype indicate that the runtime overhead is acceptable. We compared the file system firewall with Type Enforcement (TE) in SELinux to show that the firewall model can accommodate many other access control models. Finally, we show the ease of use of the firewall model: when the firewall is restricted to a specified part of the system, all other resources are unaffected, which enables a relatively smooth adoption. This, and the fact that it is a familiar model to system administrators, should facilitate adoption and correct use. A user study we conducted on traditional UNIX access control, SELinux, and the file system firewall confirmed this: beginner users found the firewall easier to use and faster to learn than the traditional UNIX access control scheme and SELinux.
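
A toy sketch of the situation-based decision idea (the actual prototype is a SUSE Linux implementation, and all rule names below are invented for illustration):

```python
# Sketch: an access decision evaluated from predicates over subject,
# object, and system attributes (e.g., time of day), firewall-style
# first-match semantics. Purely illustrative.
from datetime import datetime

RULES = [
    # (predicate over (subject, object, system attributes), decision)
    (lambda s, o, sysattr: o.startswith("/etc/") and s != "root", "DENY"),
    (lambda s, o, sysattr: not (8 <= sysattr["hour"] < 18), "DENY"),  # off-hours
    (lambda s, o, sysattr: True, "ALLOW"),                            # default
]

def check_access(subject, obj):
    sysattr = {"hour": datetime.now().hour}   # system-generated attribute
    for predicate, decision in RULES:
        if predicate(subject, obj, sysattr):
            return decision

print(check_access("alice", "/etc/shadow"))    # DENY (protected path)
print(check_access("alice", "/home/alice/x"))  # ALLOW, or DENY off-hours
```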

Relevance:

30.00%

Publisher:

Abstract:

The three-step test is central to the regulation of copyright limitations at the international level. Delineating the room for exemptions with abstract criteria, the three-step test is by far the most important and comprehensive basis for the introduction of national use privileges. It is an essential, flexible element in the international limitation infrastructure that allows national law makers to satisfy domestic social, cultural, and economic needs. Given the universal field of application that follows from the test's open-ended wording, the provision creates much more breathing space than the more specific exceptions recognized in international copyright law. EC copyright legislation, however, fails to take advantage of the flexibility inherent in the three-step test. Instead of using the international provision as a means to open up the closed EC catalogue of permissible exceptions, offer sufficient breathing space for social, cultural, and economic needs, and enable EC copyright law to keep pace with the rapid development of the Internet, the Copyright Directive 2001/29/EC encourages the application of the three-step test to further restrict statutory exceptions that are often defined narrowly in national legislation anyway. In the current online environment, however, enhanced flexibility in the field of copyright limitations is indispensable. From a social and cultural perspective, the web 2.0 promotes and enhances freedom of expression and information with its advanced search engine services, interactive platforms, and various forms of user-generated content. From an economic perspective, it creates a parallel universe of traditional content providers relying on copyright protection, and emerging Internet industries whose further development depends on robust copyright limitations. In particular, the newcomers in the online market – social networking sites, video forums, and virtual worlds – promise a remarkable potential for economic growth that has already attracted the attention of the OECD. Against this background, the time is ripe to debate the introduction of an EC fair use doctrine on the basis of the three-step test. Otherwise, EC copyright law is likely to frustrate important opportunities for cultural, social, and economic development. To lay the groundwork for the debate, the differences between the continental European and the Anglo-American approach to copyright limitations (section 1), and the specific merits of these two distinct approaches (section 2), will be discussed first. An analysis of current problems that have arisen under the present dysfunctional EC system (section 3) will then serve as a starting point for proposing an EC fair use doctrine based on the three-step test (section 4). Drawing conclusions, the international dimension of this fair use proposal will be considered (section 5).

Relevance:

30.00%

Publisher:

Abstract:

A new anisotropic elastic-viscoplastic damage constitutive model for bone is proposed, using an eccentric elliptical yield criterion and nonlinear isotropic hardening. A micromechanics-based multiscale homogenization scheme proposed by Reisinger et al. is used to obtain the effective elastic properties of lamellar bone. The dissipative process in bone is modeled as viscoplastic deformation coupled to damage. The model is based on an orthotropic eccentric elliptical criterion in stress space. In order to simplify material identification, an eccentric elliptical isotropic yield surface was defined in strain space, which is transformed to a stress-based criterion by means of the damaged compliance tensor. Viscoplasticity is implemented by means of the continuous Perzyna formulation. Damage is modeled by a scalar function of the accumulated plastic strain, D(κ), reducing all elements of the stiffness matrix. A polynomial flow rule is proposed in order to capture the rate-dependent post-yield behavior of lamellar bone. A numerical algorithm to perform the back projection on the rate-dependent yield surface has been developed and implemented in the commercial finite element solver Abaqus/Standard as a user subroutine UMAT. A consistent tangent operator has been derived and implemented in order to ensure quadratic convergence. Correct implementation of the algorithm, convergence, and accuracy of the tangent operator were verified by means of strain- and stress-based single-element tests. A finite element simulation of nanoindentation in lamellar bone was finally performed in order to show the capabilities of the newly developed constitutive model.
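
A one-dimensional sketch of Perzyna-type viscoplasticity coupled to scalar damage; the parameters, the linear D(κ) law, and the power-law flow rule are illustrative stand-ins for the paper's orthotropic formulation:

```python
# Sketch: 1D Perzyna viscoplasticity with a scalar damage variable D(kappa)
# degrading the stiffness, integrated explicitly under a constant strain
# rate. All parameter values are illustrative.
import numpy as np

E, sig_y = 10e9, 80e6            # Young's modulus [Pa], yield stress [Pa]
eta, n_exp = 1e8, 1.0            # Perzyna viscosity [Pa*s] and exponent
strain_rate, dt = 1e-2, 1e-4     # applied strain rate [1/s], time step [s]

eps = eps_p = kappa = D = sigma = 0.0
for _ in range(30000):                        # 3 s of loading
    eps += strain_rate * dt
    D = min(0.9, 20.0 * kappa)                # damage D(kappa), linear stand-in
    sigma = (1.0 - D) * E * (eps - eps_p)     # damaged elastic law
    over = sigma / sig_y - 1.0                # normalized overstress
    if over > 0.0:                            # Perzyna flow: rate from overstress
        eps_p_dot = (over ** n_exp) * sig_y / eta
        eps_p += eps_p_dot * dt
        kappa += eps_p_dot * dt               # accumulated plastic strain

print(f"strain {eps:.3f}, stress {sigma/1e6:.1f} MPa, damage {D:.2f}")
```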

Relevance:

30.00%

Publisher:

Abstract:

Fourier transform infrared spectroscopy (FTIRS) can provide detailed information on organic and minerogenic constituents of sediment records. Based on a large number of sediment samples of varying age (0–340,000 yrs) and from very diverse lake settings in Antarctica, Argentina, Canada, Macedonia/Albania, Siberia, and Sweden, we have developed universally applicable calibration models for the quantitative determination of biogenic silica (BSi; n = 816), total inorganic carbon (TIC; n = 879), and total organic carbon (TOC; n = 3164) using FTIRS. These models are based on the differential absorbance of infrared radiation at specific wavelengths with varying concentrations of individual parameters, due to molecular vibrations associated with each parameter. The calibration models have low prediction errors, and the predicted values are highly correlated with conventionally measured values (R = 0.94–0.99). Robustness tests indicate the accuracy of the newly developed FTIRS calibration models is similar to that of conventional geochemical analyses. Consequently, FTIRS offers a useful and rapid alternative to conventional analyses for the quantitative determination of BSi, TIC, and TOC. The rapidity, cost-effectiveness, and small sample size required enable FTIRS determination of geochemical properties to be undertaken at higher resolutions than would otherwise be possible with the same resource allocation, thus providing crucial sedimentological information for climatic and environmental reconstructions.
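
A sketch of the calibration idea on synthetic data; partial least squares (PLS) regression is assumed here as the chemometric method, which the abstract does not specify:

```python
# Sketch: regress a sediment property (e.g., TOC) on FTIR absorbance
# spectra and evaluate by cross-validation. Data are synthetic; PLS is
# a common, assumed choice for such spectral calibrations.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 200, 400
spectra = rng.normal(size=(n_samples, n_wavenumbers))   # absorbance spectra
toc = spectra[:, 50] * 3 + spectra[:, 200] * 1.5 + rng.normal(0.1, 0.2, n_samples)

model = PLSRegression(n_components=5)
predicted = cross_val_predict(model, spectra, toc, cv=10)
r = np.corrcoef(toc, predicted.ravel())[0, 1]
print(f"cross-validated R = {r:.2f}")
```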

Relevance:

30.00%

Publisher:

Abstract:

Objective: Impaired cognition is an important dimension in psychosis and its at-risk states. Research on the value of impaired cognition for psychosis prediction in at-risk samples, however, mainly relies on study-specific sample means of neurocognitive tests, which, unlike widely available general test norms, are difficult to translate into clinical practice. The aim of this study was to explore the combined predictive value of at-risk criteria and neurocognitive deficits according to test norms, using a risk stratification approach. Method: Potential predictors of psychosis (neurocognitive deficits and at-risk criteria) over 24 months were investigated in 97 at-risk patients. Results: The final prediction model included (1) at-risk criteria (attenuated psychotic symptoms plus subjective cognitive disturbances) and (2) a processing speed deficit (digit symbol test). The model was stratified into 4 risk classes with hazard rates between 0.0 (both predictors absent) and 1.29 (both predictors present). Conclusions: The combination of a processing speed deficit and at-risk criteria provides an optimized stratified risk assessment. Because it is based on neurocognitive test norms, the validity of our proposed 3 risk classes could easily be examined in independent at-risk samples and, pending positive validation results, our approach could easily be applied in clinical practice in the future.
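
A sketch of the stratification logic with invented patient data, estimating a crude hazard rate (events per person-year) per predictor combination; the study itself used proper survival methodology:

```python
# Sketch: stratify by two binary predictors (at-risk criteria present,
# processing-speed deficit present) and estimate a crude hazard rate
# per class. All rows are invented for illustration.
import numpy as np

# columns: criteria, deficit, transition to psychosis (0/1), follow-up [yr]
data = np.array([
    [0, 0, 0, 2.0], [0, 1, 0, 2.0], [0, 1, 1, 1.0],
    [1, 0, 0, 2.0], [1, 0, 1, 1.5], [1, 1, 1, 0.5],
    [1, 1, 1, 1.0], [1, 1, 0, 2.0],
])

for c in (0, 1):
    for d in (0, 1):
        grp = data[(data[:, 0] == c) & (data[:, 1] == d)]
        if len(grp):
            rate = grp[:, 2].sum() / grp[:, 3].sum()  # events / person-years
            print(f"criteria={c}, deficit={d}: hazard rate ~ {rate:.2f}/yr")
```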

Relevance:

30.00%

Publisher:

Abstract:

Water-conducting faults and fractures were studied in the granite-hosted Äspö Hard Rock Laboratory (SE Sweden). On a scale of decametres and larger, steeply dipping faults dominate and contain a variety of different fault rocks (mylonites, cataclasites, fault gouges). On a smaller scale, somewhat less regular fracture patterns were found. Conceptual models of the fault and fracture geometries and of the properties of rock types adjacent to fractures were derived and used as input for the modelling of in situ dipole tracer tests that were conducted in the framework of the Tracer Retention Understanding Experiment (TRUE-1) on a scale of metres. After the identification of all relevant transport and retardation processes, blind predictions of the breakthroughs of conservative to moderately sorbing tracers were calculated and then compared with the experimental data. This paper provides the geological basis and model calibration, while the predictive and inverse modelling work is the topic of the companion paper [J. Contam. Hydrol. 61 (2003) 175]. The TRUE-1 experimental volume is highly fractured and contains the same types of fault rocks and alterations as on the decametric scale. The experimental flow field was modelled on the basis of a 2D-streamtube formalism with an underlying homogeneous and isotropic transmissivity field. Tracer transport was modelled using the dual porosity medium approach, which is linked to the flow model by the flow porosity. Given the substantial pumping rates in the extraction borehole, the transport domain has a maximum width of a few centimetres only. It is concluded that both the uncertainty with regard to the length of individual fractures and the detailed geometry of the network along the flowpath between injection and extraction boreholes are not critical because flow is largely one-dimensional, whether through a single fracture or a network. Process identification and model calibration were based on a single uranine breakthrough (test PDT3), which clearly showed that matrix diffusion had to be included in the model even over the short experimental time scales, evidenced by a characteristic shape of the trailing edge of the breakthrough curve. Using the geological information and therefore considering limited matrix diffusion into a thin fault gouge horizon resulted in a good fit to the experiment. On the other hand, fresh granite was found not to interact noticeably with the tracers over the time scales of the experiments. While fracture-filling gouge materials are very efficient in retarding tracers over short periods of time (hours–days), their volume is very small and, with time progressing, retardation will be dominated by altered wall rock and, finally, by fresh granite. In such rocks, both porosity (and therefore the effective diffusion coefficient) and sorption Kds are more than one order of magnitude smaller compared to fault gouge, thus indicating that long-term retardation is expected to occur but to be less pronounced.
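
The matrix-diffusion signature in the trailing edge can be illustrated with the classical single-fracture solution for a Dirac injection with diffusion into an unlimited matrix (illustrative parameters, not TRUE-1 fits):

```python
# Sketch: breakthrough-curve tailing from matrix diffusion. For a Dirac
# injection into a single fracture with diffusion into an infinite matrix,
# a classical solution from fractured-rock hydrology gives the transfer
# function below; its late-time tail decays as t^(-3/2).
import numpy as np

t_w = 1.0    # advective water travel time [h]
a = 2.0      # lumped matrix-diffusion parameter [h^0.5], illustrative

t = np.linspace(1.01, 100, 500)          # elapsed time [h]
tau = t - t_w
c = a / (2 * np.sqrt(np.pi)) * tau**-1.5 * np.exp(-a**2 / (4 * tau))

# Late-time log-log slope approaches -1.5, the matrix-diffusion signature
slope = np.polyfit(np.log(t[-100:]), np.log(c[-100:]), 1)[0]
print(f"late-time slope ~ {slope:.2f}")
```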

Relevance:

30.00%

Publisher:

Abstract:

Late long-term potentiation (L-LTP) denotes long-lasting strengthening of synapses between neurons. L-LTP appears essential for the formation of long-term memory, with memories at least partly encoded by patterns of strengthened synapses. How memories are preserved for months or years, despite molecular turnover, is not well understood. Ongoing recurrent neuronal activity, during memory recall or during sleep, has been hypothesized to preferentially potentiate strong synapses, preserving memories. This hypothesis has not been evaluated in the context of a mathematical model representing ongoing activity and the biochemical pathways important for L-LTP. In this study, ongoing activity was incorporated into two such models: a reduced model that represents some of the essential biochemical processes, and a more detailed published model. The reduced model represents synaptic tagging and gene induction simply and intuitively, and the detailed model adds activation of essential kinases by Ca(2+). Ongoing activity was modeled as continual brief elevations of Ca(2+). In each model, two stable states of synaptic strength/weight resulted. Positive feedback between synaptic weight and the amplitude of ongoing Ca(2+) transients underlies this bistability. A tetanic or theta-burst stimulus switches a model synapse from a low basal weight to a high weight that is stabilized by ongoing activity. Bistability was robust to parameter variations in both models. Simulations illustrated that prolonged periods of decreased activity reset synaptic strengths to low values, suggesting a plausible forgetting mechanism, whereas episodic activity with shorter inactive intervals maintained strong synapses. Both models yield experimentally testable predictions. Tests of these predictions are expected to further understanding of how neuronal activity is coupled to maintenance of synaptic strength. Further investigations that examine the dynamics of activity and synaptic maintenance can be expected to help in understanding how memories are preserved for up to a lifetime in animals, including humans.
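
A minimal sketch of the bistability mechanism described above: positive feedback between synaptic weight and the amplitude of ongoing Ca(2+) transients, with illustrative rate functions rather than the published equations:

```python
# Sketch: a weight W with Ca2+-driven potentiation and turnover decay.
# Because the transient amplitude grows with W, the system has a stable
# low (basal) state and a stable high (potentiated) state.

def dW_dt(W, k_dec=1.0, K=0.6, n=6):
    ca = 0.2 + 0.8 * W                  # transient amplitude grows with weight
    pot = ca**n / (K**n + ca**n)        # sigmoidal Ca2+-driven potentiation
    return pot - k_dec * W              # decay from molecular turnover

def settle(W0, dt=0.01, steps=20000):
    """Forward-Euler integration to a steady state."""
    W = W0
    for _ in range(steps):
        W += dW_dt(W) * dt
    return W

print(f"low start    -> W = {settle(0.05):.2f}")  # relaxes to basal state
print(f"post-tetanus -> W = {settle(0.90):.2f}")  # maintained high state
```

In this toy version, lowering the basal transient amplitude (the 0.2 term) eliminates the high fixed point, which parallels the forgetting behavior the abstract describes for prolonged periods of decreased activity.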

Relevance:

30.00%

Publisher:

Abstract:

In the course of this study, the stiffness of an array of mineralized collagen fibrils modeled with a mean field method was validated experimentally at two site-matched levels of the tissue hierarchy using mineralized turkey leg tendons (MTLT). The applied modeling approaches made it possible to model the properties of this unidirectional tissue from the nanoscale (mineralized collagen fibrils) to the macroscale (mineralized tendon). At the microlevel, the indentation moduli obtained with a mean field homogenization scheme were compared to the experimental ones obtained with microindentation. At the macrolevel, the macroscopic stiffness predicted with micro finite element (μFE) models was compared to the experimental stiffness measured with uniaxial tensile tests. Elastic properties of the elements in the μFE models were taken from the mean field model or from two-directional microindentation. Quantitatively, the indentation moduli can be properly predicted with the mean field models. Local stiffness trends within specific tissue morphologies are very weak, suggesting that additional factors are responsible for the stiffness variations. At the macrolevel, the μFE models underestimate the macroscopic stiffness compared to the tensile tests, but the correlations are strong.
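
For orientation, a sketch of the simplest homogenization estimates for a unidirectional fibril array: Voigt and Reuss bounds with textbook-order phase properties. The paper's mean field scheme (after Reisinger et al.) is more elaborate and falls between such bounds:

```python
# Sketch: Voigt (axial, iso-strain) and Reuss (transverse, iso-stress)
# bounds on the stiffness of a collagen-mineral composite. Values are
# illustrative order-of-magnitude inputs, not the paper's parameters.
E_collagen = 1.5   # GPa, hydrated collagen (illustrative)
E_mineral = 110.0  # GPa, hydroxyapatite (illustrative)
v_mineral = 0.4    # mineral volume fraction (illustrative)

E_voigt = v_mineral * E_mineral + (1 - v_mineral) * E_collagen
E_reuss = 1 / (v_mineral / E_mineral + (1 - v_mineral) / E_collagen)

print(f"axial (Voigt) bound:      {E_voigt:.1f} GPa")
print(f"transverse (Reuss) bound: {E_reuss:.1f} GPa")
```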

Relevance:

30.00%

Publisher:

Abstract:

The CHaracterizing ExOPlanet Satellite (CHEOPS) is an ESA Small Mission whose launch is planned for the end of 2017. It is a Ritchey-Chretien telescope with a 320 mm aperture providing a FoV of 0.32 degrees. It will target nearby bright stars already known to host planets and measure, through ultra-high-precision photometry, the radii of exoplanets, allowing their composition to be constrained. This paper presents the details of the assembly, integration, and verification (AIV) plan for a demonstration model of the CHEOPS telescope with an equivalent structure but different CTEs. Alignment procedures, the needed ground support equipment (GSE), and the devised verification tests are described, and a path is sketched for the AIV of the flight model, which will take place at industry premises. © 2014 Society of Photo-Optical Instrumentation Engineers (SPIE).
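
For context, the measurement principle behind the radius goal: the photometric transit depth gives the planet-to-star radius ratio via depth = (Rp/Rs)^2. A worked example with illustrative numbers:

```python
# Sketch: planet radius from a measured transit depth, assuming the
# stellar radius is known. Numbers below are illustrative only.
import math

depth_ppm = 100.0          # measured transit depth [parts per million]
R_star = 695_700.0         # stellar radius [km], Sun-like star assumed

ratio = math.sqrt(depth_ppm * 1e-6)   # Rp / Rs
R_planet = ratio * R_star
print(f"Rp/Rs = {ratio:.4f}, Rp ~ {R_planet:.0f} km "
      f"({R_planet / 6371:.1f} Earth radii)")
```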