965 results for objective techniques


Relevance: 30.00%

Abstract:

The design of fault-tolerant systems is gaining importance in large domains of embedded applications where design constraints are as important as reliability. New software techniques, based on the selective application of redundancy, have shown remarkable fault coverage with reduced costs and overheads. However, the large number of different solutions provided by these techniques, and the costly process of assessing their reliability, make design space exploration a very difficult and time-consuming task. This paper proposes the integration of a multi-objective optimization tool with a software hardening environment to perform automatic design space exploration in the search for the best trade-offs between reliability, cost, and performance. The first tool is driven by the NSGA-II multi-objective genetic algorithm, which can pursue many design goals simultaneously. The second is a compiler-based infrastructure that automatically produces selectively protected (hardened) versions of the software and generates accurate overhead reports and fault coverage estimations. The advantages of our proposal are illustrated by means of a complex and detailed case study involving a typical embedded application, the AES (Advanced Encryption Standard).
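
As a concrete illustration of the multi-objective exploration described above, the sketch below performs the NSGA-II-style non-dominated sorting that underpins such a search; the three objectives and the candidate evaluations are illustrative assumptions, not the paper's actual tool interface.

```python
# Minimal sketch of NSGA-II-style non-dominated sorting over candidate
# hardened software configurations. The three objectives (cost, runtime
# overhead, 1 - fault coverage, all minimized) and the candidate values
# are illustrative assumptions.
from typing import List, Tuple

Objectives = Tuple[float, float, float]  # (cost, overhead, 1 - coverage)

def dominates(a: Objectives, b: Objectives) -> bool:
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population: List[Objectives]) -> List[Objectives]:
    """Return the non-dominated subset (the first NSGA-II front)."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Hypothetical evaluations of four selective-hardening configurations:
candidates = [(1.0, 0.30, 0.10), (1.2, 0.10, 0.15),
              (0.8, 0.50, 0.05), (1.3, 0.35, 0.12)]
print(pareto_front(candidates))  # the last candidate is dominated and dropped
```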

Relevance: 30.00%

Abstract:

Objective: To discuss the diagnosis and treatment of a patient with cubital tunnel syndrome and to illustrate novel treatment modalities for the ulnar nerve and its surrounding structures and target tissues. The rationale for the addition of nerve-gliding techniques will be highlighted. Clinical Features: Two months after onset, a 17-year-old female nursing student who had a traumatic onset of cubital tunnel syndrome still experienced pain around the elbow and paresthesia in the ulnar nerve distribution. Electrodiagnostic tests were negative. Segmental cervicothoracic motion dysfunctions were present, which were regarded as contributing factors hindering natural recovery. Intervention and Outcomes: After 6 sessions consisting of nerve-gliding techniques and segmental joint manipulation, plus a home exercise program consisting of nerve gliding and light free-weight exercises, a substantial improvement was recorded at both the impairment and functional levels (pain scales, clinical tests, and Northwick Park Questionnaire). Symptoms did not recur within a 10-month follow-up period, and pain and disability had completely resolved. Conclusions: Movement-based management may be beneficial in the conservative management of cubital tunnel syndrome. As this intervention contrasts with the traditional recommendation of immobilization, comparing the effects of both interventions in a systematic way is an essential next step in determining the optimal treatment of patients with cubital tunnel syndrome.

Relevance: 30.00%

Abstract:

This paper presents the load profiling of electricity customers, using the knowledge discovery in databases (KDD) procedure, a data mining technique, to determine load profiles for different types of customers. Current load profiling methods are compared by analysing and evaluating the data mining classification techniques they employ. The objective of this study is to determine the best load profiling methods and data mining techniques to classify, detect, and predict non-technical losses in the distribution sector due to faulty metering and billing errors, as well as to gather knowledge on customer behaviour and preferences so as to gain a competitive advantage in the deregulated market. This paper focuses mainly on the comparative analysis of the selected classification techniques; a forthcoming paper will focus on the detection and prediction methods.
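
A minimal sketch of one load-profiling step of the kind compared above: clustering daily consumption curves into representative customer profiles. The 24-point daily curves, the synthetic data, and the choice of k-means are assumptions for illustration, not the paper's method.

```python
# Hedged sketch: cluster daily load curves into representative profiles.
# Synthetic stand-ins for smart-meter data: 200 customers x 24 hourly readings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
residential = rng.normal(1.0, 0.2, (100, 24)) + np.sin(np.linspace(0, 2 * np.pi, 24))
commercial = rng.normal(2.0, 0.2, (100, 24))
daily_curves = np.vstack([residential, commercial])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(daily_curves)
profiles = kmeans.cluster_centers_  # one representative load profile per class
print(profiles.shape)  # (2, 24)
```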

Relevance: 30.00%

Abstract:

This thesis presents a review of the wide range of modern dense matching algorithms, which are spreading across different application and research fields, with particular attention to the innovative “Semi-Global” matching techniques. The choice to develop a semi-global numerical code was justified by the need to gain insight into the variables and strategies that affect the algorithm's performance, with the primary objectives of maximizing the method's accuracy and efficiency and the completeness of its results. The dissertation consists of a metrological characterization of our proprietary implementation of the semi-global matching algorithm, evaluating the influence of several matching variables and functions implemented in the process, and comparing the accuracy and completeness of the different results (digital surface models, disparity maps, and 2D displacement fields) obtained using our code and other commercial and open-source matching programs in a wide variety of application fields.
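
To make the semi-global idea concrete, here is a minimal sketch of Semi-Global Matching's path-wise cost aggregation along a single left-to-right path; the penalties P1/P2 and the random cost volume are illustrative assumptions, not the thesis's proprietary code.

```python
# Minimal sketch of the core SGM recurrence (path-wise cost aggregation):
# L(p,d) = C(p,d) + min(L(q,d), L(q,d-1)+P1, L(q,d+1)+P1, min_k L(q,k)+P2)
#          - min_k L(q,k),  with q the previous pixel on the path.
import numpy as np

def aggregate_left_to_right(cost: np.ndarray, P1=10.0, P2=120.0) -> np.ndarray:
    """cost: (H, W, D) matching cost volume; returns path-aggregated costs."""
    H, W, D = cost.shape
    L = np.empty_like(cost)
    L[:, 0, :] = cost[:, 0, :]
    for x in range(1, W):
        prev = L[:, x - 1, :]                       # (H, D)
        prev_min = prev.min(axis=1, keepdims=True)  # best cost at previous pixel
        shift_minus = np.pad(prev[:, :-1], ((0, 0), (1, 0)), constant_values=np.inf) + P1
        shift_plus = np.pad(prev[:, 1:], ((0, 0), (0, 1)), constant_values=np.inf) + P1
        jump = prev_min + P2
        best = np.minimum(np.minimum(prev, shift_minus), np.minimum(shift_plus, jump))
        L[:, x, :] = cost[:, x, :] + best - prev_min  # subtract min to bound growth
    return L

cost_volume = np.random.default_rng(1).random((32, 48, 16))
disparity = aggregate_left_to_right(cost_volume).argmin(axis=2)  # winner-takes-all
```

A full implementation repeats this aggregation along (typically) 8 or 16 path directions and sums the results before the winner-takes-all step.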

Relevance: 30.00%

Abstract:

A major application of computers has been to control physical processes, in which the computer is embedded within some large physical process and is required to control concurrent physical processes. The main difficulty with these systems is their event-driven characteristics, which complicate their modelling and analysis. Although a number of researchers in the process system community have approached the problems of modelling and analysing such systems, there is still a lack of standardised software development formalisms for system (controller) development, particularly at the early stages of the system design cycle. This research forms part of a larger research programme concerned with the development of real-time process-control systems in which software is used to control concurrent physical processes. The general objective of the research in this thesis is to investigate the use of formal techniques in the analysis of such systems at the early stages of their development, with a particular bias towards application to high-speed machinery. Specifically, the research aims to generate a standardised software development formalism for real-time process-control systems, particularly for software controller synthesis. In this research, a graphical modelling formalism called Sequential Function Chart (SFC), a variant of Grafcet, is examined. SFC, which is defined in the international standard IEC 1131 as a graphical description language, has been used widely in industry and has achieved an acceptable level of maturity and acceptance. A comparative study between SFC and Petri nets is presented in this thesis. To overcome identified inaccuracies in SFC, a formal definition of its firing rules is given. To provide a framework in which SFC models can be analysed formally, an extended time-related Petri net model for SFC is proposed and the transformation method is defined. The SFC notation lacks a systematic way of synthesising system models from real-world systems. Thus a standardised approach to the development of real-time process-control systems is required, such that the system (software) functional requirements can be identified, captured, and analysed. A rule-based approach and a method called the system behaviour driven method (SBDM) are proposed as a development formalism for real-time process-control systems.
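
As a small illustration of the Petri-net semantics onto which SFC models are mapped, the sketch below implements the basic firing rule (a transition fires only when every input place holds a token); the place and transition names are illustrative, not taken from the thesis.

```python
# Hedged sketch of Petri-net firing semantics for SFC-like step sequences.
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    marking: dict                                    # place -> token count
    transitions: dict = field(default_factory=dict)  # name -> (inputs, outputs)

    def enabled(self, name: str) -> bool:
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name: str) -> None:
        if not self.enabled(name):
            raise ValueError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Two SFC steps in sequence: firing t1 moves the token from step1 to step2.
net = PetriNet(marking={"step1": 1, "step2": 0},
               transitions={"t1": (["step1"], ["step2"])})
net.fire("t1")
print(net.marking)  # {'step1': 0, 'step2': 1}
```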

Relevance: 30.00%

Abstract:

Distortion or deprivation of vision during an early 'critical' period of visual development can result in permanent visual impairment, which indicates the need to identify and treat visually at-risk individuals early. A significant difficulty in this respect is that conventional, subjective methods of visual acuity determination are ineffective before approximately three years of age. In laboratory studies, infant visual function has been quantified precisely using objective methods based on visual evoked potentials (VEP), preferential looking (PL) and optokinetic nystagmus (OKN), but clinical assessment of infant vision has presented a particular difficulty. An initial aim of this study was to evaluate the relative clinical merits of the three techniques. Clinical derivatives were devised; the OKN method proved unsuitable, but the PL and VEP methods were evaluated in a pilot study. Most infants participating in the study had known ocular and/or neurological abnormalities, but a few normals were included for comparison. The study suggested that the PL method was more clinically appropriate for the objective assessment of infant acuity. A study of normal visual development from birth to one year was subsequently conducted. Observations included cycloplegic refraction, ophthalmoscopy and preferential looking visual acuity assessment using horizontally and vertically oriented square-wave gratings. The aims of the work were to investigate the efficiency and sensitivity of the technique and to study possible correlates of visual development. The success rate of the PL method varied with age; 87% of newborns and 98% of infants attending follow-up successfully completed at least one acuity test. Below two months, monocular acuities were difficult to secure; infants were most testable around six months. The results produced were similar to published data using the acuity card procedure and slightly lower than, but comparable with, acuity data derived using extended PL methods. Acuity development was not impaired in infants found to have retinal haemorrhages as newborns. A significant relationship was found between newborn binocular acuity and anisometropia, but not with other refractive findings. No strong or consistent correlations between grating acuity and refraction were found for three-, six- or twelve-month-olds. Improvements in acuity and decreases in levels of hyperopia over the first week of life were suggestive of recovery from minor birth trauma. The refractive data were analysed separately to investigate the natural history of refraction in normal infants. Most newborns (80%) were hyperopic; significant astigmatism was found in 86% and significant anisometropia in 22%. No significant alteration in spherical equivalent refraction was noted between birth and three months; a significant reduction in hyperopia was evident by six months, and this trend continued until one year. Observations on the astigmatic component of the refractive error revealed a rather erratic series of changes which would be worthy of further investigation, since a repeat refraction study suggested difficulties in obtaining stable measurements in newborns. Astigmatism tended to decrease between birth and three months, increased significantly from three to six months and decreased significantly from six to twelve months. A constant decrease in the degree of anisometropia was evident throughout the first year. These findings have implications for the correction of infantile refractive error.

Relevance: 30.00%

Abstract:

The starting point of this research was the belief that manufacturing and similar industries need help with the concept of e-business, especially in assessing the relevance of possible e-business initiatives. The research hypothesis was that it should be possible to produce a systematic model that defines, at a useful level of detail, the probable e-business requirements of an organisation based on objective criteria, with an accuracy of 85%-90%. This thesis describes the development and validation of such a model. A preliminary model was developed from a variety of sources, including a survey of current and planned e-business activity and representative examples of e-business material produced by e-business solution providers. The model was subjected to a process of testing and refinement based on recursive case studies, with controls over the improving accuracy and stability of the model. Useful conclusions were also possible as to the relevance of e-business functions to the case study participants themselves. Techniques were evolved to synthesise the e-business requirements of an organisation and present them at a management-summary level of detail. The results of applying these techniques to all the case studies used in this research are discussed. The conclusion of the research was that the case study methodology employed was successful. A model was achieved suitable for practical application in a manufacturing organisation requiring help with a requirements definition process.

Relevance: 30.00%

Abstract:

Aim: To use previously validated image analysis techniques to determine the incremental nature of printed subjective anterior eye grading scales. Methods: A purpose-designed computer program was written to detect edges using a 3 × 3 kernel and to extract colour planes in the selected area of an image. Annunziato and Efron pictorial grades, and CCLRU and Vistakon-Synoptik photographic grades, of bulbar hyperaemia, palpebral hyperaemia, palpebral roughness, and corneal staining were analysed. Results: The increments of the grading scales were best described by a quadratic rather than a linear function. Edge detection and colour extraction image analysis for bulbar hyperaemia (r² = 0.35-0.99), palpebral hyperaemia (r² = 0.71-0.99), palpebral roughness (r² = 0.30-0.94), and corneal staining (r² = 0.57-0.99) correlated well with scale grades, although the increments varied in magnitude and direction between different scales. Repeated image analysis measures had a 95% confidence interval of between 0.02 (colour extraction) and 0.10 (edge detection) scale units (on a 0-4 scale). Conclusion: The printed grading scales were more sensitive for grading features of low severity, but grades were not comparable between grading scales. Grading of palpebral hyperaemia and staining is complicated by the variable presentations possible. Image analysis techniques are 6-35 times more repeatable than subjective grading, with a sensitivity of 1.2-2.8% of the scale.
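
A brief sketch of the two image-analysis measures described above (edge detection with a 3 × 3 kernel and colour-plane extraction over a selected area), together with the quadratic fit used to describe scale increments; the Sobel kernel, synthetic image, and example values are illustrative assumptions.

```python
# Hedged sketch: 3x3 edge detection, colour-plane extraction, quadratic fit.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(2)
image = rng.random((64, 64, 3))   # stand-in for a photographic grade image
roi = image[16:48, 16:48]         # selected area of the image

# Edge detection: 3x3 Sobel kernel applied to the green plane.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
edge_score = np.abs(convolve(roi[:, :, 1], sobel_x)).mean()

# Colour extraction: relative strength of the red plane (hyperaemia proxy).
redness = roi[:, :, 0].mean() / roi.mean()

# Fit grading-scale increments with a quadratic, as the paper reports:
grades = np.array([0, 1, 2, 3, 4], dtype=float)
measured = np.array([0.10, 0.18, 0.31, 0.50, 0.74])  # hypothetical image metrics
quadratic = np.polyfit(grades, measured, deg=2)       # [a, b, c] coefficients
```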

Relevance: 30.00%

Abstract:

To solve multi-objective problems, multiple reward signals are often scalarized into a single value and further processed using established single-objective problem-solving techniques. While the field of multi-objective optimization has made many advances in applying scalarization techniques to obtain good solution trade-offs, the utility of applying these techniques in the multi-objective multi-agent learning domain has not yet been thoroughly investigated. Agents learn the value of their decisions by linearly scalarizing their reward signals at the local level, while acceptable system-wide behaviour results. However, the non-linear relationship between the weighting parameters of the scalarization function and the learned policy makes the discovery of system-wide trade-offs time-consuming. Our first contribution is a thorough analysis of well-known scalarization schemes within the multi-objective multi-agent reinforcement learning setup. The analysed approaches intelligently explore the weight space in order to find a wider range of system trade-offs. In our second contribution, we propose a novel adaptive weight algorithm which interacts with the underlying local multi-objective solvers and allows for a better coverage of the Pareto front. Our third contribution is the experimental validation of our approach by learning bi-objective policies in self-organising smart camera networks. We note that our algorithm (i) explores the objective space faster on many problem instances, (ii) obtains solutions that exhibit a larger hypervolume, and (iii) acquires a greater spread in the objective space.
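
For concreteness, the sketch below shows the two ingredients the contributions build on: linear scalarization of a reward vector into a single learning signal, and the two-objective hypervolume used to compare trade-off sets; the weights and points are illustrative assumptions.

```python
# Hedged sketch: linear scalarization and 2D hypervolume (both maximized).
import numpy as np

def scalarize(rewards: np.ndarray, weights: np.ndarray) -> float:
    """Collapse a multi-objective reward vector into one learning signal."""
    return float(np.dot(weights, rewards))

def hypervolume_2d(front: np.ndarray, reference: np.ndarray) -> float:
    """Dominated area between a 2D maximization front and a reference point."""
    pts = front[np.argsort(-front[:, 0])]  # sort by first objective, descending
    hv, prev_y = 0.0, reference[1]
    for x, y in pts:
        if y > prev_y:
            hv += (x - reference[0]) * (y - prev_y)
            prev_y = y
    return hv

front = np.array([[0.9, 0.2], [0.6, 0.6], [0.3, 0.8]])
print(scalarize(np.array([0.9, 0.2]), np.array([0.5, 0.5])))   # 0.55
print(hypervolume_2d(front, reference=np.array([0.0, 0.0])))   # 0.48
```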

Relevance: 30.00%

Abstract:

Premium intraocular lenses (IOLs) such as toric IOLs, multifocal IOLs (MIOLs) and accommodating IOLs (AIOLs) can provide better refractive and visual outcomes than standard monofocal designs, leading to greater levels of post-operative spectacle independence. The principal theme of this thesis is the development of new assessment techniques that can help to improve future premium IOL design. IOLs designed to correct astigmatism form the focus of the first part of the thesis. A novel toric IOL design was devised to decrease the effect of toric rotation on patient visual acuity, but was found to have neither a beneficial nor a detrimental impact on visual acuity retention. IOL tilt, like rotation, may curtail visual performance; however, current IOL tilt measurement techniques require specialist equipment not readily available in most ophthalmological clinics. Thus a new method was proposed that applies Pythagoras's theorem to the symmetry of the IOL optic in digital images in order to calculate tilt, and it was shown to be both accurate and highly repeatable. A literature review revealed little information on the relationship between IOL tilt, decentration and rotation, and so this was examined. A poor correlation between these factors was found, indicating that they occur independently of each other. Next, presbyopia-correcting IOLs were investigated. The light distribution of different MIOLs and an AIOL was assessed using perimetry, to establish whether this could be used to inform optimal IOL design. Anticipated differences in threshold sensitivity between IOLs were not, however, found; thus perimetry was concluded to be ineffective in mapping the retinal projection of blur. The observed difference between subjective and objective measures of accommodation, arising from the influence of pseudoaccommodative factors, was explored next to establish how much additional objective power would be required to restore the eye's focus with AIOLs. Blur tolerance was found to be the key contributor to the ocular depth of focus, with an approximate dioptric influence of 0.60 D. Our understanding of MIOLs may be limited by the need for subjective defocus curves, which are lengthy and do not permit important additional measures to be undertaken. The use of aberrometry to provide faster objective defocus curves was examined. Although subjective and objective measures related well, the peaks of the MIOL defocus curve profile were not evident with objective prediction of acuity, indicating a need for further refinement of visual quality metrics based on ocular aberrations. The experiments detailed in the thesis evaluate methods to improve visual performance with toric IOLs. They also investigate new techniques to allow more rapid post-operative assessment of premium IOLs, which could allow greater insights to be obtained into several aspects of visual quality, in order to optimise future IOL design and ultimately enhance patient satisfaction.
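
To illustrate the tilt-measurement idea described above: a circular IOL optic viewed off-axis projects as an ellipse, so the foreshortened axis and the true diameter are related by right-triangle geometry and the tilt angle can be recovered trigonometrically. The pixel measurements below are hypothetical; this is a sketch of the geometric principle, not the thesis's actual procedure.

```python
# Hedged sketch: tilt angle of a circular optic from its imaged foreshortening.
import math

def tilt_from_projection(true_diameter_px: float, foreshortened_axis_px: float) -> float:
    """Tilt (degrees) from the apparent minor axis of the imaged optic."""
    ratio = min(foreshortened_axis_px / true_diameter_px, 1.0)
    return math.degrees(math.acos(ratio))

# Hypothetical measurements from a digital anterior-eye image:
print(round(tilt_from_projection(600.0, 588.0), 2))  # ~11.5 degrees of tilt
```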

Relevance: 30.00%

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared with other x-ray imaging modalities; as a result of this, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate for assessing modern CT scanners that have implemented the aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve metrics of image quality, such as the contrast-to-noise ratio (CNR), and more sophisticated observer models, such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that the non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found not to correlate strongly with human performance, especially when comparing different reconstruction algorithms.
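
As a sketch of the simplest of the observer models named above, the non-prewhitening matched filter applies the expected lesion template to signal-present and signal-absent images and derives a detectability index from the two response distributions; the Gaussian template and white-noise images are illustrative assumptions.

```python
# Hedged sketch: non-prewhitening (NPW) matched filter detectability index.
import numpy as np

rng = np.random.default_rng(3)
y, x = np.mgrid[-16:16, -16:16]
template = np.exp(-(x**2 + y**2) / (2 * 4.0**2))  # expected lesion profile

absent = rng.normal(0, 1, (200, 32, 32))          # noise-only ROIs
present = absent + 0.5 * template                 # lesion added at known location

# Template response (inner product) for each ROI:
t_a = np.tensordot(absent, template, axes=([1, 2], [0, 1]))
t_p = np.tensordot(present, template, axes=([1, 2], [0, 1]))

# NPW detectability index from the two response distributions:
d_prime = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_p.var() + t_a.var()))
print(d_prime)
```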

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity, and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that with FBP, the noise was independent of the background (textured vs uniform). However, with SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing the image quality of iterative algorithms.
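
For reference, the sketch below shows the standard square-ROI NPS estimate that the irregular-ROI method generalizes: average the squared DFT of mean-subtracted noise patches and normalize by the pixel area; the pixel size and synthetic patches are assumptions.

```python
# Hedged sketch: 2D noise power spectrum from square noise-only ROIs.
import numpy as np

def nps_2d(rois: np.ndarray, pixel_size_mm: float) -> np.ndarray:
    """rois: (N, ny, nx) noise-only patches -> 2D NPS in HU^2 mm^2."""
    n, ny, nx = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)
    dft2 = np.abs(np.fft.fft2(detrended)) ** 2
    return dft2.mean(axis=0) * (pixel_size_mm**2) / (ny * nx)

patches = np.random.default_rng(4).normal(0, 10, (50, 64, 64))  # stand-in noise
nps = nps_2d(patches, pixel_size_mm=0.5)
print(nps.sum() / (0.5**2 * 64 * 64))  # Parseval check: ~= noise variance (100)
```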

To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in the uniform phantom than in the textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
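
A minimal sketch in the spirit of the lesion-modeling framework described above: lesion size, contrast, and edge sharpness expressed as parameters of a closed-form radial profile that can be voxelized and added to a patient background; the sigmoid edge profile and parameter values are illustrative assumptions, not the dissertation's actual models.

```python
# Hedged sketch: analytical lesion model with a smooth sigmoid edge profile.
import numpy as np

def lesion_model(shape=(64, 64), radius=10.0, contrast_hu=-15.0, edge_width=2.0):
    """Voxelized lesion: flat core whose contrast falls off smoothly at the edge."""
    y, x = np.mgrid[: shape[0], : shape[1]]
    r = np.hypot(x - shape[1] / 2, y - shape[0] / 2)
    return contrast_hu / (1.0 + np.exp((r - radius) / edge_width))

# Create a "hybrid" image by adding the model to a background patch
# (a synthetic stand-in here; the dissertation inserts into patient data):
background = np.random.default_rng(5).normal(50, 10, (64, 64))
hybrid = background + lesion_model()
```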

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased the detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.

In conclusion, this dissertation provides the scientific community with a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Relevance: 30.00%

Abstract:

The section of CN railway between Vancouver and Kamloops runs along the base of many hazardous slopes, including the White Canyon, which is located just outside the town of Lytton, BC. The slope has a history of frequent rockfall activity, which presents a hazard to the railway below. Rockfall inventories can be used to understand the frequency-magnitude relationship of events on hazardous slopes; however, it can be difficult to consistently and accurately identify rockfall source zones and volumes on large slopes with frequent activity, leaving many inventories incomplete. We have studied this slope as part of the Canadian Railway Ground Hazard Research Program and have collected remote sensing data, including terrestrial laser scanning (TLS), photographs, and photogrammetry data, since 2012, and used change detection to identify rockfalls on the slope. The objective of this thesis is to use a subset of these data to understand how rockfalls identified from TLS data can be used to understand the frequency-magnitude relationship of rockfalls on the slope. This includes incorporating both new and existing methods to develop a semi-automated workflow to extract rockfall events from the TLS data. We show that these methods can be used to identify events as small as 0.01 m³ and that the duration between scans can have an effect on the frequency-magnitude relationship of the rockfalls. We also show that by incorporating photogrammetry data into our analysis, we can create a 3D geological model of the slope and use it to classify rockfalls by lithology, to further understand the rockfall failure patterns. When relating the rockfall activity to triggering factors, we found that the amount of precipitation occurring over the winter has an effect on the overall rockfall frequency for the remainder of the year. These results can provide the railways with a more complete inventory of events compared to records created through track inspection or rockfall monitoring systems installed on the slope. In addition, we can use the database to understand the spatial and temporal distribution of events. The results can also be used as an input to rockfall modelling programs.
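
As an illustration of the frequency-magnitude analysis described above, the sketch below fits a power law N(V) = a·V^(-b) to the cumulative count of rockfalls at or above each volume via a straight-line fit in log-log space; the synthetic volumes stand in for a TLS-derived inventory.

```python
# Hedged sketch: power-law fit to a cumulative rockfall frequency-magnitude curve.
import numpy as np

rng = np.random.default_rng(6)
# Synthetic stand-in inventory: 500 event volumes (m^3), all >= 0.01 m^3.
volumes = np.sort(0.01 * (1 + rng.pareto(1.1, 500)))
cumulative = np.arange(len(volumes), 0, -1)  # count of events >= each volume

# log10 N = log10 a - b * log10 V  ->  straight-line fit in log-log space.
slope, intercept = np.polyfit(np.log10(volumes), np.log10(cumulative), deg=1)
print(f"power-law exponent b: {-slope:.2f}, intercept a: {10**intercept:.1f}")
```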

Relevance: 30.00%

Abstract:

Context: Cesarean section is a surgical procedure performed in more than a quarter of deliveries in North America. The surgical techniques used to close the uterus during cesarean section vary, and they influence healing and the woman's risk of short- and long-term complications. It has been suggested that single-layer locked closure increases the risk of uterine rupture and of uterine scar defects. However, in the absence of high-level evidence, this technique is still practised in Canada and North America. Objective: To compare the impact of different uterine closure techniques during cesarean section on short- and long-term maternal complications. Methods: Three systematic reviews and meta-analyses of observational studies or randomized controlled trials (RCTs) were carried out. The prevalence of scar defects and the short- and long-term outcomes were compared between uterine closure techniques. Subsequently, a randomized controlled trial evaluated three uterine closure techniques: single-layer locked, double-layer locked, and double-layer unlocked excluding the decidua, in 81 women undergoing an elective primary cesarean at ≥ 38 weeks of gestation. Residual myometrial thickness was measured six months after the cesarean using transvaginal ultrasound and compared with Student's t test. Results: The systematic reviews and meta-analyses showed that 37% to 59% of women had a uterine scar defect after their cesarean. Regarding short-term complications, the uterine closure types studied were comparable, except that single-layer locked closure was associated with a shorter operative time than double-layer closure (-6.1 minutes, 95% confidence interval (CI) -8.7 to -3.4, p<0.001). Single-layer locked closure was associated with a higher risk of uterine rupture than double-layer locked closure (odds ratio 4.96; 95% CI 2.58-9.52, p<0.001). The RCT also showed that single-layer locked closure was associated with a thinner residual myometrium than double-layer unlocked closure excluding the decidua (3.8 ± 1.6 mm vs 6.1 ± 2.2 mm; p<0.001). Finally, no significant difference was detected in the frequency of haemostatic sutures between techniques (p=1.000). Conclusion: In an elective primary cesarean at term, double-layer unlocked closure is associated with a thicker myometrium than single-layer locked closure, without increasing the use of haemostatic sutures. Moreover, it is suggested that double-layer closure may reduce the risk of uterine rupture in a subsequent pregnancy. Finally, closure in women in labour requires further study.
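
For readers unfamiliar with the effect measure reported above, the sketch below computes an odds ratio with a 95% confidence interval from a 2×2 table on the log-odds scale; the event counts are hypothetical, not data from the meta-analysis.

```python
# Hedged sketch: odds ratio and 95% CI from a 2x2 table (log-odds scale).
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a/b: events/non-events (group 1); c/d: events/non-events (group 2)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Hypothetical rupture counts for single-layer vs double-layer closure:
print(odds_ratio_ci(20, 980, 4, 996))  # -> (OR ~5.1, CI ~1.7 to ~14.9)
```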

Relevance: 30.00%

Abstract:

The selection of a set of requirements from among all the requirements previously defined by customers is an important process, repeated at the beginning of each development step when an incremental or agile software development approach is adopted. The set of selected requirements will be developed during the current iteration. This selection problem can be reformulated as a search problem, allowing its treatment with metaheuristic optimization techniques. This paper studies how to apply Ant Colony Optimization algorithms to select requirements. First, we describe the problem formally, extending an earlier version of the problem, and introduce a method based on Ant Colony System to find a variety of efficient solutions. The performance achieved by the Ant Colony System is compared with that of Greedy Randomized Adaptive Search Procedure and the Non-dominated Sorting Genetic Algorithm, by means of computational experiments carried out on two instances of the problem constructed from data provided by experts.
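
A minimal sketch of the ant-colony idea applied to requirements selection: artificial ants construct requirement subsets under an effort budget, guided by pheromone levels and a value/effort heuristic, and pheromone is reinforced on good subsets. The instance data, parameters, and the simplified update rule are illustrative assumptions, not the paper's Ant Colony System.

```python
# Hedged sketch: ant-style constructive search for requirements selection.
import random

values = [9, 5, 7, 3, 8, 4]    # customer value of each requirement
efforts = [5, 3, 6, 2, 7, 3]   # development effort of each requirement
budget = 12
pheromone = [1.0] * len(values)

def build_subset(rng: random.Random, alpha=1.0, beta=2.0):
    """One ant builds a feasible subset, biased by pheromone and value/effort."""
    chosen, spent = [], 0
    candidates = list(range(len(values)))
    rng.shuffle(candidates)
    for i in candidates:
        if spent + efforts[i] > budget:
            continue
        weight = (pheromone[i] ** alpha) * ((values[i] / efforts[i]) ** beta)
        if rng.random() < weight / (1.0 + weight):  # stochastic acceptance
            chosen.append(i)
            spent += efforts[i]
    return chosen

rng = random.Random(7)
best, best_value = [], 0
for _ in range(200):  # 200 ants
    subset = build_subset(rng)
    value = sum(values[i] for i in subset)
    if value > best_value:
        best, best_value = subset, value
    for i in subset:  # evaporation plus simple quality-weighted reinforcement
        pheromone[i] = 0.9 * pheromone[i] + 0.1 * value / (best_value or 1)
print(sorted(best), best_value)
```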

Relevance: 30.00%

Abstract:

Transdermal drug delivery has recently received increasing attention in the face of growing challenges in delivering peptide and protein drugs. Controlled transdermal delivery is an important route for peptides and proteins: it can maintain the therapeutic effectiveness of the drug by minimizing the enzymatic degradation that is a major concern in other noninvasive routes of delivery, such as the oral route. Although the advantages of transdermal delivery are very desirable, the natural obstacle to drug entry imposed by the skin's barrier function makes it one of the most difficult routes of administration. Iontophoresis and electroporation have been reported to be useful permeation-enhancing techniques in the transdermal delivery of protein and peptide drugs. The objective of the present study is to use the above enhancement techniques to deliver cyclosporin A (CSA) to treat psoriasis. The in vitro experiments were performed using hairless rat skin as the model, with Franz diffusion cells for iontophoresis and custom-made diffusion cells for electroporation. The donor drug solution consisted of an aqueous solution of a CSA-polymer solid dispersion (coevaporate) and/or a hydroethanolic solution of CSA. PBS was used as the receiver solution. ³H-labelled CSA and ¹⁴C-labelled ethanol were used to facilitate analysis using a liquid scintillation counter. The control experiment consisted of a passive diffusion study. Silver/silver chloride electrodes were used in all studies. In the iontophoresis experiments a constant DC current (0.5 mA/cm²) was used. In the electroporation experiments different delivery parameters were studied: (1) applied electrode voltage (Uelectrode), (2) decay time constant (τ), (3) the number of pulses delivered (single or multiple), and (4) the time of diffusive contact with the drug after electroporation ('contact duration'). Compared to passive diffusion, iontophoresis did not result in a significant increase in the amount of CSA delivered transdermally with either the CSA-polymer donor or the hydroethanolic drug solution. With the use of electroporation there was a significant increase in transdermal delivery compared to passive transport. With the CSA-polymer coevaporate donor solution the increase in delivery was only about 6-fold, whereas with the hydroethanolic solution the increase was about 60-fold compared to passive diffusion. The 'contact duration' was an important factor, and a 4-hour 'contact duration' was found to be the optimum time period required for effective transdermal delivery. Use of single-pulse (τ = 5.6 ms) electroporation resulted in a significant increase (p<0.05) in the delivery of CSA into skin (CSAskin) and of EtOH into the receiver (EtOHreceiver). With multiple pulses (τ = 10 ms, 25 pulses) the increase in CSAskin was more pronounced, a 60-fold increase compared to passive delivery. However, there was no significant increase in the other two quantities, viz. CSAreceiver and EtOHreceiver.