945 results for Limit analysis
Abstract:
This investigation is in two parts, theory and experimental verification. (1) Theoretical Study. In this study it is necessary first to analyse the concept of formability. For the purpose of the present investigation it is sufficient to define the four aspects of formability as follows: (a) the formability of the material at a critical section, (b) the formability of the material in general, (c) process efficiency, (d) proportional increase in surface area. A method of quantitative assessment is proposed for each of the four aspects of formability. The theoretical study also covers the distinction between coaxial and non-coaxial strains, which occur, respectively, in axisymmetrical and unsymmetrical forming processes, and explains, in the light of this distinction, why the circular grid system is inadequate for the assessment of formability. (2) Experimental Study. As one of the bases of the experimental work, the determination of the end point of a forming process, which sets the limit to the formability of the work material, is discussed. The effects of three process parameters on draw-in are shown graphically. The delay of fracture in sheet metal forming resulting from draw-in is then analysed in kinematical terms, namely through the radial displacements, the radial and circumferential strains, and the projected thickness of the workpiece. Through the equilibrium equation of the membrane stresses, the effect of draw-in on the shape of the unsupported region of the workpiece, and hence on the position of the critical section, is explained. The effect of draw-in on the four aspects of formability is discussed throughout this investigation. The triangular coordinate system is used to present and analyse the triaxial strains involved. This coordinate system has the advantage of showing all three principal strains in a material simultaneously, as well as representing clearly the many types of strain involved in sheet metal work.
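The triangular coordinate system referred to above can be illustrated with a short sketch. The construction below is an assumption about the kind of coordinate system meant: each principal strain is assigned an axis at 120° to the others, and a strain state (e1, e2, e3) is plotted as the vector sum e1·u1 + e2·u2 + e3·u3. Under volume constancy (e1 + e2 + e3 = 0) this mapping loses no information.

```python
import numpy as np

# Unit axis directions at 120 degrees, one per principal strain
# (an assumed construction of a triangular strain coordinate system).
AXES = np.array([
    [1.0, 0.0],
    [-0.5, np.sqrt(3.0) / 2.0],
    [-0.5, -np.sqrt(3.0) / 2.0],
])

def triangular_coords(e1, e2, e3):
    """Map a principal-strain state onto 2-D triangular coordinates."""
    return e1 * AXES[0] + e2 * AXES[1] + e3 * AXES[2]

# Example: a strain state satisfying volume constancy (e1 + e2 + e3 = 0).
e1, e2 = 0.30, 0.0
e3 = -(e1 + e2)
print(triangular_coords(e1, e2, e3))   # 2-D point representing the strain state
```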
Abstract:
Purpose: Published data indicate that the polar lipid content of human meibomian gland secretions (MGS) could be anything between 0.5% and 13% of the total lipid. The tear film phospholipid composition has not been studied in great detail, and it has been assumed that the relative proportions of lipids in MGS would be maintained in the tear film. The purpose of this work was to determine the concentration of phospholipids in the human tear film. Methods: Liquid chromatography mass spectrometry (LCMS) and thin layer chromatography (TLC) were used to determine the concentration of phospholipid in the tear film. Additionally, an Amplex Red phosphatidylcholine-specific phospholipase C (PLC) assay kit was used to determine the activity of PLC in the tear film. Results: Phospholipids were not detected in any of the tested human tear samples, the lower limits of detection being 1.3 µg/mL for TLC and 4 µg/mL for LCMS. TLC indicated that diacylglycerol (DAG) may be present in the tear film. PLC was present in the tear film with an activity of approximately 15 mU/mL, equivalent to the removal of head groups from phosphatidylcholine at a rate of approximately 15 µM/min. Conclusions: This work shows that phospholipid was not detected in any of the tested human tear samples (above the lower limits of detection described) and suggests the presence of DAG in the tear film. DAG is known to be present at low concentrations in MGS. These observations indicate that PLC may play a role in modulating the tear film phospholipid concentration.
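The equivalence quoted above (15 mU/mL of PLC activity corresponding to head-group removal at roughly 15 µM/min) follows from the definition of the enzyme unit (1 U = 1 µmol of substrate converted per minute). A minimal sketch of the arithmetic:

```python
# 1 U = 1 umol/min, so 1 mU = 1 nmol/min.
activity_mU_per_mL = 15.0

nmol_per_min_per_mL = activity_mU_per_mL                 # 15 nmol/min per mL
umol_per_min_per_L = nmol_per_min_per_mL * 1e-3 * 1e3    # nmol -> umol, per mL -> per L
print(umol_per_min_per_L, "uM/min")                      # 15.0 uM/min
```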
Abstract:
Code division multiple access (CDMA) in which the spreading code assignment to users contains a random element has recently become a cornerstone of CDMA research. The random element in the construction is particularly attractive because it provides robustness and flexibility in utilizing multiaccess channels, whilst not making significant sacrifices in terms of transmission power. Random codes are generated from some ensemble; here we consider the possibility of combining two standard paradigms, sparsely and densely spread codes, in a single composite code ensemble. The composite code analysis includes a replica symmetric calculation of performance in the large system limit, and an investigation of finite systems through a composite belief propagation algorithm. A variety of codes are examined, with a focus on the high multi-access interference regime. We demonstrate scenarios, both in the large system limit and for finite systems, in which the composite code has typical performance exceeding that of sparse and dense codes at equivalent signal-to-noise ratio.
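As an illustration of what a composite code ensemble might look like, the sketch below draws a spreading matrix whose columns mix a dense random component with a sparse one, and forms the standard synchronous CDMA channel output. The power split, sparsity level and noise level are illustrative assumptions, not the construction analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 64, 48        # chips per symbol, number of users (illustrative sizes)
gamma = 0.5          # fraction of power sent through the dense part (assumed)
C = 3                # nonzero chips per user in the sparse part (assumed)

# Dense component: i.i.d. +/-1 chips, normalised to unit column energy.
dense = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)

# Sparse component: C nonzero +/-1 chips per user, also unit column energy.
sparse = np.zeros((N, K))
for k in range(K):
    rows = rng.choice(N, size=C, replace=False)
    sparse[rows, k] = rng.choice([-1.0, 1.0], size=C) / np.sqrt(C)

# Composite ensemble: power-weighted superposition of the two parts.
S = np.sqrt(gamma) * dense + np.sqrt(1.0 - gamma) * sparse

# Synchronous CDMA channel: y = S b + noise, with BPSK symbols b.
b = rng.choice([-1.0, 1.0], size=K)
sigma = 0.5          # illustrative noise standard deviation
y = S @ b + sigma * rng.normal(size=N)
print(y.shape)       # received chip vector for one symbol interval
```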
Abstract:
We study soliton solutions of the path-averaged propagation equation governing the transmission of dispersion-managed (DM) optical pulses in the (practical) limit when residual dispersion and nonlinearity only slightly affect the pulse dynamics over one compensation period. In the case of small dispersion map strengths, the averaged pulse dynamics is governed by a perturbed form of the nonlinear Schrödinger equation; applying a perturbation theory – elsewhere developed – based on inverse scattering theory, we derive an analytic expression for the envelope of the DM soliton. This expression correctly predicts the power enhancement arising from the dispersion management. Theoretical results are verified by direct numerical simulations.
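For reference, the unperturbed nonlinear Schrödinger equation and the generic perturbed form referred to above can be written as follows (in normalised units; the explicit perturbation term produced by the dispersion-map averaging is derived in the paper and is not reproduced here):

```latex
i\,\frac{\partial u}{\partial z}
  + \frac{1}{2}\,\frac{\partial^{2} u}{\partial t^{2}}
  + |u|^{2} u = 0,
\qquad
i\,\frac{\partial u}{\partial z}
  + \frac{1}{2}\,\frac{\partial^{2} u}{\partial t^{2}}
  + |u|^{2} u = \varepsilon\,R[u],
\quad |\varepsilon| \ll 1 .
```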
Abstract:
A hybrid passive-active damping solution with improved system stability margin and enhanced dynamic performance is proposed for high power grid-interactive converters. In grid-connected active rectifier/inverter applications, a line-side LCL filter improves the high frequency attenuation and makes the converter compatible with stringent grid power quality regulations. Passive damping offers a simple and reliable solution, but it reduces overall converter efficiency. Active damping solutions do not increase the system losses but can only guarantee stable operation up to a certain speed of dynamic response, which is limited by the maximum bandwidth of the current controller. This paper examines this limit and introduces the concept of a hybrid passive-active damping solution with improved stability margin and high dynamic performance for line-side LCL filter based active rectifier/inverter applications. A detailed design and analysis of the hybrid approach, and the trade-off between system losses and dynamic performance in grid-connected applications, are reported. Simulation and experimental results from a 10 kVA prototype demonstrate the effectiveness of the proposed solution. An analytical study of system stability and dynamic response under variations of the controller and passive filter parameters is presented.
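As a rough illustration of the design quantities involved in an LCL-filtered converter, the sketch below computes the filter resonance frequency and a commonly quoted rule-of-thumb value for a series passive damping resistor. The component values are illustrative assumptions, not those of the 10 kVA prototype.

```python
import math

# Illustrative LCL filter values (assumed, not the prototype's parameters).
L_conv = 2.0e-3   # converter-side inductance [H]
L_grid = 0.5e-3   # grid-side inductance [H]
C_f    = 10.0e-6  # filter capacitance [F]

# LCL resonance (angular) frequency.
w_res = math.sqrt((L_conv + L_grid) / (L_conv * L_grid * C_f))
f_res = w_res / (2.0 * math.pi)

# One common rule of thumb for a series passive damping resistor:
# roughly one third of the capacitor impedance at resonance.
R_d = 1.0 / (3.0 * w_res * C_f)

print(f"f_res = {f_res:.0f} Hz, R_d = {R_d:.2f} ohm")
```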
Abstract:
Cardiotocography provides significant information on foetal oxygenation linked to characteristics of foetal heart rate signals. Among the most important of these is foetal heart rate variability, whose spectral analysis is recognised as useful in improving the diagnosis of pathological conditions. However, despite its importance, a standardised definition and estimation of foetal heart rate variability is still lacking. Some guidelines state that variability refers to fluctuations in the baseline free from accelerations and decelerations. This is an important limitation in clinical routine, since variability during these FHR alterations has always been regarded as particularly significant in terms of prognostic value. In this work we compute foetal heart rate variability as the difference between the foetal heart rate and the floatingline, and we propose a method for extraction of the floatingline which takes accelerations and decelerations into account. © 2011 Springer-Verlag Berlin Heidelberg.
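The computation described above can be sketched as follows. The floatingline below is a simple placeholder (a centred moving average of the FHR trace); the paper's actual extraction method, which accounts for accelerations and decelerations, is not reproduced here, and the 4 Hz sampling rate is an assumption.

```python
import numpy as np

def floatingline(fhr, window=120):
    """Placeholder floatingline: centred moving average of the FHR trace."""
    kernel = np.ones(window) / window
    return np.convolve(fhr, kernel, mode="same")

def fhr_variability(fhr, window=120):
    """Variability as the difference between FHR and the floatingline."""
    return fhr - floatingline(fhr, window)

# Example with a synthetic FHR trace sampled at 4 Hz (assumed rate).
t = np.arange(0, 600, 0.25)                       # 10 minutes of samples
fhr = 140 + 5 * np.sin(2 * np.pi * t / 120) + np.random.normal(0, 2, t.size)
v = fhr_variability(fhr)
print(v.std())                                    # a crude variability index
```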
Abstract:
Background: During the last decade the use of ECG recordings in biometric recognition studies has increased. ECG characteristics make it suitable for subject identification: it is unique, present in all living individuals, and hard to forge. However, in spite of the great number of approaches found in the literature, no agreement exists on the most appropriate methodology. This study aimed at providing a survey of the techniques used so far in ECG-based human identification. Specifically, a pattern recognition perspective is proposed, providing a unifying framework to appreciate previous studies and, hopefully, guide future research. Methods: We searched for papers on the subject from the earliest available date using relevant electronic databases (Medline, IEEEXplore, Scopus, and Web of Knowledge). The following terms were used in different combinations: electrocardiogram, ECG, human identification, biometric, authentication and individual variability. The electronic sources were last searched on 1st March 2015. Our selection included published research in peer-reviewed journals, book chapters and conference proceedings. The search was performed for English language documents. Results: 100 pertinent papers were found. The number of subjects involved in the journal studies ranges from 10 to 502, ages range from 16 to 86, and both male and female subjects are generally present. The number of analysed leads varies, as do the recording conditions. Identification performance differs widely, as does the verification rate. Many studies refer to publicly available databases (the Physionet ECG database repository) while others rely on proprietary recordings, making them difficult to compare. As a measure of overall accuracy we computed a weighted average of the identification rate and of the equal error rate in authentication scenarios. The identification rate was 94.95 % while the equal error rate was 0.92 %. Conclusions: Biometric recognition is a mature field of research. Nevertheless, the use of physiological signal features, such as ECG traits, needs further improvement. ECG features have the potential to be used in daily activities such as access control and patient handling, as well as in wearable electronics applications. However, some barriers still limit its growth. Further analysis should address the use of single lead recordings and the study of features which are not dependent on the recording sites (e.g. fingers, hand palms). Moreover, it is expected that new techniques will be developed using fiducial and non-fiducial based features in order to catch the best of both approaches. ECG recognition in pathological subjects is also worthy of additional investigation.
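The overall accuracy figures quoted above are weighted averages across studies. A minimal sketch of that computation, assuming the weights are the numbers of subjects per study (the survey's exact weighting scheme is not given here) and using made-up numbers:

```python
# Hypothetical per-study results: (number of subjects, identification rate in %).
studies = [(20, 98.0), (100, 96.5), (502, 94.2), (35, 91.0)]

total_subjects = sum(n for n, _ in studies)
weighted_id_rate = sum(n * rate for n, rate in studies) / total_subjects
print(f"weighted identification rate: {weighted_id_rate:.2f} %")
```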
Abstract:
MSC 2010: 26A33, 34D05, 37C25
Abstract:
2000 Mathematics Subject Classification: 35Q02, 35Q05, 35Q10, 35B40.
Abstract:
Preimplantation genetic diagnosis (PGD) following in vitro fertilization (IVF) offers couples at risk of transmitting genetic disorders the opportunity to identify affected embryos prior to replacement. In particular, embryo gender determination permits screening for X-linked diseases of unknown etiology. Analysis of embryos can be performed by polymerase chain reaction (PCR) amplification of material obtained by micromanipulation. This approach provides an alternative to the termination of an established pregnancy following chorionic villus sampling or amniocentesis. Lately, the focus of preimplantation diagnosis and intervention has been shifting toward an attempt to correct cytoplasmic deficiencies. Accordingly, the aim of this investigation is to develop methods that permit the examination of single cells, or components thereof, for clinical evaluation. In an attempt to lay the groundwork for precise therapeutic intervention for age-related aneuploidy, transcripts encoding proteins believed to be involved in the proper segregation of chromosomes during human oocyte maturation were examined and quantified. Following fluorescent rapid cycle RT-PCR analysis it was determined that the concentration of cell cycle checkpoint gene transcripts decreases significantly as maternal age increases. Given the well established link between increasing maternal age and the incidence of aneuploidy, these results suggest that the degradation of these messages in aging oocytes may be involved in inappropriate chromosome separation during meiosis. In order to investigate the cause of the embryonic rescue observed following clinical cytoplasmic transfer procedures, and with the objective of developing a diagnostic tool, mtDNA concentrations in polar bodies and subcellular components were evaluated. First, the typical concentration of mtDNA in human and mouse oocytes was determined by fluorescent rapid cycle PCR. Some disparity was noted between the copy numbers of individual cytoplasmic samples, which may limit the use of the current methodology for the clinical assessment of the corresponding oocyte.
Abstract:
Small errors proved catastrophic. Our purpose is to remark that a very small cause which escapes our notice can determine a considerable effect that we cannot fail to see, and then we say that the effect is due to chance. Small differences in the initial conditions produce very great ones in the final phenomena. A small error in the former will produce an enormous error in the latter. When dealing with any kind of electrical device specification, it is important to note that there exists a pair of test conditions that define a test: the forcing function and the limit. Forcing functions define the external operating constraints placed upon the device tested. The actual test defines how well the device responds to these constraints. Forcing inputs to threshold, for example, represents the most difficult testing because it puts those inputs as close as possible to the actual switching critical points and guarantees that the device will meet the input-output specifications. Prediction becomes impossible by classical analytical analysis bounded by Newton and Euclid. We have found that nonlinear dynamic behaviour is the natural state of all circuits and devices. Opportunities exist for effective error detection in a nonlinear dynamics and chaos environment. Nowadays there is a set of linear limits established around every aspect of a digital or analog circuit, outside of which devices are considered bad after failing the test. That circuits exhibit deterministic chaos is a fact, not a possibility, as revealed by this Ph.D. research. In practice, for linear standard informational methodologies, this chaotic data product is usually undesirable and we are educated to be interested in obtaining a more regular stream of output data. This Ph.D. research explored the possibilities of taking the foundation of a very well known simulation and modeling methodology and introducing nonlinear dynamics and chaos precepts to produce a new error detector instrument able to put together streams of data scattered in space and time, thereby mastering deterministic chaos and changing the bad reputation of chaotic data as a potential risk for practical system status determination.
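The sensitivity to initial conditions invoked above can be made concrete with a standard toy example: two logistic-map trajectories started a distance 1e-10 apart diverge to order one within a few dozen iterations. This is an illustrative sketch, not the circuit model used in the research.

```python
# Two logistic-map trajectories, x -> r*x*(1-x), started 1e-10 apart.
r = 4.0
x, y = 0.2, 0.2 + 1e-10

for step in range(1, 61):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.3e}")
# The separation grows roughly exponentially until it saturates at order one.
```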
Abstract:
This research focuses on the design and verification of inter-organizational controls. Instead of looking at a documentary procedure, which is the flow of documents and data among the parties, the research examines the underlying deontic purpose of the procedure, the so-called deontic process, and identifies control requirements to secure this purpose. The vision of the research is a formal theory for streamlining bureaucracy in business and government procedures. Underpinning most inter-organizational procedures are deontic relations, which concern the rights and obligations of the parties. When all parties trust each other, they are willing to fulfill their obligations and honor the counterparties' rights; thus controls may not be needed. The challenge is in cases where trust may not be assumed. In these cases, the parties need to rely on explicit controls to reduce their exposure to the risk of opportunism. However, at present there is no analytic approach or technique to determine which controls are needed for a given contracting or governance situation. The research proposes a formal method for deriving inter-organizational control requirements based on static analysis of deontic relations and dynamic analysis of deontic changes. The formal method takes a deontic process model of an inter-organizational transaction and certain domain knowledge as inputs to automatically generate control requirements that a documentary procedure needs to satisfy in order to limit fraud potentials. The deliverables of the research include a formal representation, namely Deontic Petri Nets, which combines multiple modal logics and Petri nets for modeling deontic processes; a set of control principles that represent an initial formal theory of the relationships between deontic processes and documentary procedures; and a working prototype that uses a model-checking technique to identify fraud potentials in a deontic process and generate control requirements to limit them. Fourteen scenarios of two well-known international payment procedures, cash in advance and documentary credit, have been used to test the prototype. The results showed that all control requirements stipulated in these procedures could be derived automatically.
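As a toy illustration of the general idea of searching a transaction's state space for fraud potentials (not the Deontic Petri Net formalism or the prototype described above), the sketch below enumerates the interleavings of a two-party exchange and flags intermediate states in which one party has performed while the counterparty's obligation is still unfulfilled.

```python
from itertools import permutations

# Toy model of a two-party exchange (illustrative only, not the paper's formalism).
ACTIONS = ("buyer_pays", "seller_ships")

def fraud_potentials(order):
    """Flag intermediate states where one party has performed while the
    counterparty's obligation is still unfulfilled."""
    findings, done = [], set()
    for action in order[:-1]:          # the final state has both obligations met
        done.add(action)
        if "buyer_pays" in done and "seller_ships" not in done:
            findings.append("buyer exposed to non-delivery after paying")
        if "seller_ships" in done and "buyer_pays" not in done:
            findings.append("seller exposed to non-payment after shipping")
    return findings

for order in permutations(ACTIONS):
    print(" -> ".join(order), ":", fraud_potentials(order))
# A control requirement (for example, a documentary credit) is what removes
# the exposure flagged for the ordering a party could not otherwise accept.
```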
Abstract:
The known moss flora of Terra Nova National Park, eastern Newfoundland, comprises 210 species. Eighty-two percent of the moss species occurring in Terra Nova are widespread or widespread-sporadic in Newfoundland. Other Newfoundland distributional elements present in the Terra Nova moss flora are the northwestern, southern, southeastern, and disjunct elements, but four of the mosses occurring in Terra Nova appear to belong to a previously unrecognized northeastern element of the Newfoundland flora. The majority (70.9%) of Terra Nova's mosses are of boreal affinity and are widely distributed in the North American coniferous forest belt. An additional 10.5 percent of the Terra Nova mosses are cosmopolitan, while 9.5 percent are temperate and 4.8 percent are arctic-montane species. The remaining 4.3 percent of the mosses are of montane affinity and disjunct between eastern and western North America. In Terra Nova, temperate species at their northern limit are concentrated in balsam fir stands, while arctic-montane species are restricted to exposed cliffs, scree slopes, and coastal exposures. Montane species are largely confined to exposed or freshwater habitats. Inability to tolerate high summer temperatures limits the distributions of both arctic-montane and montane species. In Terra Nova, species of differing phytogeographic affinities co-occur on cliffs and scree slopes. The microhabitat relationships of five selected species from such habitats were evaluated by Discriminant Functions Analysis and Multiple Regression Analysis. The five mosses have distinct and different microhabitats on cliffs and scree slopes in Terra Nova, and the abundance of all but one is associated with variation in at least one microhabitat variable. The micro-distribution of Grimmia torquata, an arctic-montane species at its southern limit, appears to be determined by sensitivity to high summer temperatures. Both southern mosses at their northern limit (Aulacomnium androgynum, Isothecium myosuroides) appear to be limited by water availability and, possibly, by low winter temperatures. The two species whose distributions extend both north and south of the study area (Encalypta procera, Eurhynchium pulchellum) show no clear relationship with microclimate. Dispersal factors have played a significant role in the development of the Terra Nova moss flora. Compared to the most likely colonizing source (i.e. the rest of the island of Newfoundland), species with small diaspores have colonized the study area to a proportionately much greater extent than have species with large diaspores. Hierarchical log-linear analysis indicates that this is so for all affinity groups present in Terra Nova. The apparent dispersal effects emphasize the comparatively recent glaciation of the area, and may also have been enhanced by anthropogenic influences. The restriction of some species to specific habitats, or to narrowly defined microhabitats, appears to strengthen selection for easily dispersed taxa.
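As an indication of the kind of analysis named above (Discriminant Functions Analysis of species microhabitat relationships), the sketch below fits a linear discriminant model to synthetic presence/absence data against hypothetical microhabitat variables. The variables, the species response and the data are invented for illustration and do not reproduce the thesis analysis.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Synthetic microhabitat variables for 200 quadrats (all values invented):
# columns = summer temperature (deg C), moisture index, light index.
X = np.column_stack([
    rng.normal(16, 3, 200),
    rng.uniform(0, 1, 200),
    rng.uniform(0, 1, 200),
])

# Hypothetical presence of a temperature-sensitive species:
# more likely where summers are cool and moisture is high.
p = 1 / (1 + np.exp(0.8 * (X[:, 0] - 15) - 3 * X[:, 1]))
y = rng.random(200) < p

lda = LinearDiscriminantAnalysis().fit(X, y)
print("discriminant coefficients:", lda.coef_)   # signs/sizes suggest which
print("training accuracy:", lda.score(X, y))     # variables separate the groups
```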