919 results for Diagnostic Algorithm Development


Relevance: 90.00%

Abstract:

Background. Hereditary cystic kidney diseases are a heterogeneous spectrum of disorders leading to renal failure. Clinical features and family history can help to distinguish the recessive from the dominant diseases, but the differential diagnosis is difficult due to the phenotypic overlap. Molecular diagnosis is often the only way to characterize the different forms. Conventional molecular screening is suitable for small genes but is expensive and time-consuming for large genes. Next Generation Sequencing (NGS) technologies enable massively parallel sequencing of nucleic acid fragments. Purpose. The first purpose was to validate a diagnostic algorithm useful to drive the genetic screening. The second aim was to validate an NGS protocol for the PKHD1 gene. Methods. DNA from 50 patients was submitted to conventional screening of the NPHP1, NPHP5, UMOD, REN and HNF1B genes. Five patients with known mutations in PKHD1 were submitted to NGS to validate the new method, and an ungenotyped proband and his parents were analyzed as a diagnostic application. Results. Conventional molecular screening detected 8 mutations: 1) the novel p.E48K of REN in a patient with cystic nephropathy, hyperuricemia, hyperkalemia and anemia; 2) p.R489X of NPHP5 in a patient with Senior-Løken syndrome; 3) p.R295C of HNF1B in a patient with renal failure and diabetes; 4) the NPHP1 deletion in 3 patients with medullary cysts; 5) the HNF1B deletion in a patient with medullary cysts and renal hypoplasia and in a diabetic patient with liver disease. NGS of PKHD1 detected all known mutations, plus two additional variants, during validation. The diagnostic NGS analysis identified compound heterozygosity in the proband, with a maternal frameshift mutation and a paternal missense mutation, in addition to a non-transmitted paternal missense mutation. Conclusions. The results confirm the validity of our diagnostic algorithm and suggest that this NGS protocol could be introduced into clinical practice.

Relevance: 90.00%

Abstract:

Background: Despite initial concerns about the sensitivity of the proposed diagnostic criteria for DSM-5 Autism Spectrum Disorder (ASD; e.g. Gibbs et al., 2012; McPartland et al., 2012), evidence is growing that the DSM-5 criteria provide an inclusive description with both good sensitivity and specificity (e.g. Frazier et al., 2012; Kent, Carrington et al., 2013). The capacity of the criteria to provide levels of sensitivity and specificity comparable with DSM-IV-TR, however, relies on careful measurement to ensure that appropriate items from diagnostic instruments map onto the new DSM-5 descriptions. Objectives: To use an existing DSM-5 diagnostic algorithm (Kent, Carrington et al., 2013) to identify a set of ‘essential’ behaviors sufficient to make a reliable and accurate diagnosis of DSM-5 ASD across age and ability level. Methods: Specific behaviors were identified and tested from the recently published DSM-5 algorithm for the Diagnostic Interview for Social and Communication Disorders (DISCO). Analyses were run on existing DISCO datasets, with a total sample size of 335 participants. Three studies provided step-by-step development towards identification of a minimum set of items. Study 1 identified the most highly discriminating items (p<.0001). Study 2 used a lower selection threshold than Study 1 (p<.05) to facilitate better representation of the full DSM-5 ASD profile. Study 3 included additional items previously reported as significantly more frequent in individuals with higher ability. The discriminant validity of all three item sets was tested using Receiver Operating Characteristic (ROC) curves. Finally, sensitivity across age and ability was investigated in a subset of individuals with ASD (n=190). Results: Study 1 identified an item set (14 items) with good discriminant validity, but one that predominantly measured social-communication behaviors (11/14 items). The Study 2 item set (48 items) better represented the full DSM-5 ASD profile and had good discriminant validity, but lacked sensitivity for individuals with higher ability. The final Study 3 adjusted item set (54 items) improved sensitivity for individuals with higher ability, and its performance was comparable to the published DISCO DSM-5 algorithm. Conclusions: This work represents a first attempt to derive a reduced set of behaviors for DSM-5 directly from an existing standardized ASD developmental history interview. Further work involving existing ASD diagnostic tools with community-based and well-characterized research samples will be required to replicate these findings and exploit their potential to contribute to a more efficient and focused ASD diagnostic process.
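The discriminant-validity testing described above rests on the area under the ROC curve. As an illustration of the underlying computation only (not the study's code, and with invented item-total scores), the ROC area can be obtained directly from pairwise score comparisons via the Mann-Whitney relation:

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """ROC area via the Mann-Whitney relation: the probability that a
    randomly chosen case scores above a randomly chosen non-case."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    wins = (pos[:, None] > neg[None, :]).sum()   # pairwise comparisons
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Invented item-endorsement totals, for illustration only.
asd_totals = [12, 10, 13, 9, 11, 14]
non_asd_totals = [3, 5, 2, 6, 4, 7]
print(roc_auc(asd_totals, non_asd_totals))  # 1.0: complete separation
```

An AUC of 1.0 means the item set separates the groups perfectly; overlapping score distributions would pull the value toward 0.5.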

Relevance: 80.00%

Abstract:

Uninhabited aerial vehicles (UAVs) are a cutting-edge technology at the forefront of aviation and aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered one of the most important issues to be addressed, given its safety-critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm.
The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts for target heading angle estimation. In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit the temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy related concepts. The filter design process is posed as a mini-max optimisation problem based on a joint RER cost criterion. 
We provide proof that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, and this in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple-model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operating conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400 m to 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning ahead of impact that approaches the 12.5-second response time recommended for human pilots. Furthermore, readily available graphics processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations.
Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue that is currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple-HMM filtering approach and a novel RER-based multiple filter design process. The utility of our multiple-HMM filtering approach and RER concepts, however, extends beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
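The first stage of the two-stage paradigm described above is morphological image enhancement. A minimal sketch of that idea, a greyscale white top-hat filter written in plain NumPy rather than the thesis's actual pipeline, shows how a point-like dim target is separated from a smooth background (frame and target values are invented):

```python
import numpy as np

def erode(img, k=3):
    """Greyscale erosion with a flat k x k structuring element."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.full(img.shape, np.inf)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, padded[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def dilate(img, k=3):
    """Greyscale dilation with a flat k x k structuring element."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.full(img.shape, -np.inf)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, padded[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def tophat(img, k=3):
    """White top-hat: image minus its morphological opening.
    Suppresses large-scale background, keeping sub-element-scale peaks."""
    return img - dilate(erode(img, k), k)

frame = np.full((7, 7), 10.0)  # smooth sky background
frame[3, 3] = 25.0             # a dim, point-like target
enhanced = tophat(frame)
print(enhanced[3, 3])          # only the target survives the filtering
```

In the thesis's paradigm, frames enhanced this way would then feed the temporal track-before-detect (HMM) stage, which accumulates weak evidence across frames.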

Relevance: 80.00%

Abstract:

Path planning and trajectory design for autonomous underwater vehicles (AUVs) is of great importance to the oceanographic research community because automated data collection is becoming more prevalent. Intelligent planning is required to maneuver a vehicle to high-value locations to perform data collection. In this paper, we present algorithms that determine paths for AUVs to track evolving features of interest in the ocean by considering the output of predictive ocean models. While traversing the computed path, the vehicle provides near-real-time, in situ measurements back to the model, with the intent of increasing the skill of future predictions in the local region. The results presented here extend preliminary developments of the path planning portion of an end-to-end autonomous prediction and tasking system for aquatic, mobile sensor networks. This extension is the incorporation of multiple vehicles to track the centroid and the boundary of the extent of a feature of interest. Similar algorithms to those presented here are under development to consider additional locations for multiple types of features. The primary focus here is on algorithm development utilizing model predictions to assist in solving the motion planning problem of steering an AUV to high-value locations, with respect to the data desired. We discuss the design technique used to generate the paths, present simulation results, and provide experimental data from field deployments for tracking dynamic features by use of an AUV in the Southern California coastal ocean.
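A toy illustration of the centroid-tracking idea, not the paper's algorithm: given a gridded model prediction, a natural waypoint for a centroid-tracking vehicle is the centroid of the cells where the predicted field exceeds a threshold (the field and threshold below are invented):

```python
import numpy as np

def feature_centroid(field, threshold):
    """Centroid (row, col) of grid cells where the predicted field
    exceeds a threshold; None when no feature is present."""
    rows, cols = np.nonzero(field > threshold)
    if rows.size == 0:
        return None
    return rows.mean(), cols.mean()

# Invented 5x5 model snapshot with a feature in one corner.
field = np.zeros((5, 5))
field[3:5, 3:5] = 1.0
r, c = feature_centroid(field, 0.5)
print(float(r), float(c))  # waypoint for the centroid-tracking vehicle
```

A boundary-tracking vehicle would instead follow the set of above-threshold cells that border below-threshold cells; recomputing either target as new model output arrives yields the evolving path.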

Relevance: 80.00%

Abstract:

Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, very few attempts have been made to explore structural damage with noise-polluted data, an unavoidable effect in the real world. Measurement data are contaminated by noise from the test environment as well as from electronic devices, and this noise tends to produce erroneous results with structural damage identification methods. It is therefore important to investigate a method that performs better with noise-polluted data. This paper introduces a new damage index using principal component analysis (PCA) for damage detection in building structures, able to accept noise-polluted frequency response functions (FRFs) as input. The FRF data are obtained from the function datagen of the MATLAB program available on the web site of the IASC-ASCE (International Association for Structural Control-American Society of Civil Engineers) Structural Health Monitoring (SHM) Task Group. The proposed method involves a five-stage process: calculation of the FRFs; calculation of damage index values using the proposed algorithm; development of artificial neural networks; introduction of the damage indices as input parameters; and damage detection of the structure. This paper briefly describes the methodology and the results obtained in detecting damage in all six cases of the benchmark study at different noise levels. The proposed method is applied to a benchmark problem sponsored by the IASC-ASCE Task Group on Structural Health Monitoring, developed to facilitate the comparison of various damage identification methods. The results show that the PCA-based algorithm is effective for structural health monitoring with noise-polluted FRFs, a common occurrence when dealing with industrial structures.
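A hedged sketch of how a PCA-based damage index of this general kind can work (an illustration, not the paper's algorithm): FRFs from the healthy state define a principal subspace, and the reconstruction residual of a measured FRF outside that subspace serves as the index. The FRF data below are synthetic:

```python
import numpy as np

def pca_damage_index(baseline_frfs, test_frf, n_components=2):
    """Project a test FRF onto the principal subspace of healthy-state
    FRFs; the reconstruction residual serves as a damage index."""
    mean = baseline_frfs.mean(axis=0)
    X = baseline_frfs - mean
    # Principal directions from the SVD of the centred baseline set.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    basis = vt[:n_components]
    centred = test_frf - mean
    resid = centred - basis.T @ (basis @ centred)
    return np.linalg.norm(resid)

rng = np.random.default_rng(0)
freqs = np.linspace(0, 1, 100)
# Synthetic healthy FRFs: a fixed resonance pattern plus measurement noise.
healthy = np.array([np.sin(6 * freqs) + 0.01 * rng.standard_normal(100)
                    for _ in range(20)])
damaged = np.sin(5.5 * freqs)  # damage shifts the resonance pattern
print(pca_damage_index(healthy, damaged) >
      pca_damage_index(healthy, healthy[0]))  # True: damage raises the index
```

In the paper's pipeline such indices, computed from noise-polluted FRFs, would then be fed to artificial neural networks for the actual damage classification.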

Relevance: 80.00%

Abstract:

Chlamydia pecorum is globally associated with several ovine diseases, including keratoconjunctivitis and polyarthritis. The exact relationship between the variety of C. pecorum strains reported and the diseases described in sheep remains unclear, challenging efforts to accurately diagnose and manage infected flocks. In the present study, we applied C. pecorum multi-locus sequence typing (MLST) to C. pecorum-positive samples collected from sympatric flocks of Australian sheep presenting with conjunctivitis, conjunctivitis with polyarthritis, polyarthritis only, or no clinical disease (NCD), in order to elucidate the exact relationships between the infecting strains and the range of diseases. Using Bayesian phylogenetic and cluster analyses on 62 C. pecorum-positive ocular, vaginal and rectal swab samples from sheep presenting with a range of diseases, and in comparison to C. pecorum sequence types (STs) from other hosts, one ST (ST 23) was recognised as a globally distributed strain associated with ovine and bovine diseases such as polyarthritis and encephalomyelitis. A second ST (ST 69), presently described only in Australian animals, was detected in association with ovine as well as koala chlamydial infections. The majority of vaginal and rectal C. pecorum STs from animals with NCD and/or from anatomical sites with no clinical signs of disease in diseased animals clustered together in a separate group in both analyses. Furthermore, 8 of the 13 detected STs were novel. This study provides a platform for strain selection for further research into the pathogenic potential of C. pecorum in animals and highlights targets for potential strain-specific diagnostic test development.

Relevance: 80.00%

Abstract:

There is much common ground between the areas of coding theory and systems theory. Fitzpatrick has shown that a Gröbner basis approach leads to efficient algorithms in the decoding of Reed-Solomon codes and in scalar interpolation and partial realization. This thesis simultaneously generalizes and simplifies that approach and presents applications to discrete-time modeling, multivariable interpolation and list decoding. Gröbner basis theory has come into its own in the context of software and algorithm development. By generalizing the concept of polynomial degree, term orders are provided for multivariable polynomial rings and free modules over polynomial rings. The orders are not, in general, unique, and this adds, in no small way, to the power and flexibility of the technique. As well as being generating sets for ideals or modules, Gröbner bases always contain an element which is minimal with respect to the corresponding term order. Central to this thesis is a general algorithm, valid for any term order, that produces a Gröbner basis for the solution module (or ideal) of elements satisfying a sequence of generalized congruences. These congruences, based on shifts and homomorphisms, are applicable to a wide variety of problems, including key equations and interpolations. At the core of the algorithm is an incremental step. Iterating this step lends a recursive/iterative character to the algorithm. As a consequence, not all of the input to the algorithm need be available from the start, and different "paths" can be taken to reach the final solution. The existence of a suitable chain of modules satisfying the criteria of the incremental step is a prerequisite for applying the algorithm.
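The dependence of a Gröbner basis on the chosen term order, central to the flexibility noted above, can be seen in a few lines of SymPy (a small illustration, not code from the thesis; any computer-algebra system would do):

```python
from sympy import groebner, symbols

x, y = symbols("x y")
ideal = [x**2 + y**2 - 1, x - y]  # a circle intersected with a line

# The basis depends on the chosen term order; different orders give
# different (but equally valid) generating sets for the same ideal.
G_lex = groebner(ideal, x, y, order="lex")
G_grevlex = groebner(ideal, x, y, order="grevlex")
print(G_lex.exprs)
print(G_grevlex.exprs)

# Ideal membership is decided by reduction modulo the basis.
print(G_lex.contains(x**2 + y**2 - 1))  # True
```

Reduction modulo the basis gives a zero remainder exactly for elements of the ideal, which is the mechanism behind the key-equation and interpolation applications the thesis develops.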

Relevance: 80.00%

Abstract:

Objective: To evaluate the practice of laparoscopic appendectomy (LA) in Italy. Methods: On behalf of the Italian Society of Young Surgeons (SPIGC), an audit of LA was carried out through a written questionnaire sent to 800 institutions in Italy. The questions concerned the diffusion of laparoscopic surgery and LA over the period 1990 through 2001, surgery-related morbidity and mortality rates, indications for LA, the diagnostic algorithm adopted prior to surgery, and the use of LA among young surgeons (<40 years). Results: A total of 182 institutions (22.7%) participated in the audit, accounting for a total of 26,863 LA procedures. Laparoscopic surgery is performed in 173 (95%) institutions, with 144 (83.2%) routinely performing LA. The mean interval from the introduction of laparoscopic surgery to the inception of LA was 3.4 ± 2.5 years. A total of 8809 (32.8%) LA procedures were performed on an emergent basis (<6 hours from admission); 10,314 (38.4%) were performed on an urgent basis (<24 hours from admission); while 7740 (28.8%) were elective. The conversion rate was 2.1% (561 cases) and was due to intraoperative complications in 197 cases (35.1%). The intraoperative complication rate was as high as 0.32%, while postoperative complications were reported in 1.2% of successfully completed LA. The mean hospital stay for successfully completed LA was 2.5 ± 1.05 days. The highest rate of intraoperative complications was reported as occurring during the learning-curve phase (the first 10 procedures) by 39.7% of the surgeons. LA was indicated for every case of suspected acute appendiceal disease by 51.8% of surgeons, and 44.8% order abdominal ultrasound (US) prior to surgery. Gynecologic counseling is deemed necessary by only 34.5% of surgeons, while an abdominal CT scan is required by only 1.5%.
The procedure is completed laparoscopically in the absence of gross appendiceal inflammation by 83% of surgeons; 79.8% try to complete the procedure laparoscopically in the presence of concomitant disease; while 10.4% convert to open surgery in cases of suspected malignancy. Of responding surgeons aged under 40, 76.3% can perform LA, compared with 47.3% of surgeons across all age categories. Conclusions: The low response rate of the present survey does not allow us to assess the diffusion of LA in Italy, but rather to appraise its practice in centers routinely performing laparoscopic surgery. In the hands of experienced surgeons, LA has morbidity rates comparable to those of international series. The higher diagnostic yield of laparoscopy makes it an invaluable tool in the management algorithm for women of childbearing age; its advantages in the presence of severe peritonitis are less clear-cut. Surgeons remain the main limiting factor preventing a wider diffusion of LA in our country, since only 47.3% of surgeons from the audited institutions can perform LA on a routine basis.

Relevance: 80.00%

Abstract:

Computational Fluid Dynamics (CFD) is gradually becoming a powerful and almost essential tool for the design, development and optimization of engineering applications. However, the mathematical modelling of erratic turbulent motion remains the key issue when tackling such flow phenomena. The reliability of CFD analysis depends heavily on the turbulence model employed, together with the wall functions implemented. In order to resolve the abrupt changes in turbulent energy and other parameters in near-wall regions, a particularly fine mesh is necessary, which inevitably increases the computer storage and run-time requirements. Turbulence modelling can be considered one of the three key elements in CFD; precise mathematical theories have evolved for the other two, grid generation and algorithm development. The principal objective of turbulence modelling is to provide computational procedures of sufficient accuracy to reproduce the main structures of three-dimensional fluid flows. The flow within an electronic system can be characterized as being in a transitional state, due to the low velocities and relatively small dimensions encountered. This paper presents simulated CFD results for an investigation into the predictive capability of turbulence models when considering both fluid flow and heat transfer phenomena. A new two-layer hybrid k-ε/k-l turbulence model for electronic application areas is also presented, which is cheap in terms of the computational mesh required and economical with regard to run-time.
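The two-layer idea can be sketched as follows. This is a generic Wolfshtein-style two-layer eddy-viscosity blend with illustrative constants and an illustrative switch value, not the new model proposed in the paper:

```python
import numpy as np

C_MU, KAPPA, A_PLUS = 0.09, 0.41, 70.0  # illustrative model constants

def eddy_viscosity(k, eps, y, nu, rey_switch=200.0):
    """Two-layer eddy viscosity: a damped mixing-length (k-l style)
    expression near the wall, switching to the standard k-eps form
    where the turbulence Reynolds number Re_y is large."""
    re_y = np.sqrt(k) * y / nu
    l_mu = KAPPA * y * (1.0 - np.exp(-re_y / A_PLUS))  # van Driest-style damping
    nu_t_inner = C_MU ** 0.25 * np.sqrt(k) * l_mu
    nu_t_outer = C_MU * k ** 2 / eps
    return np.where(re_y < rey_switch, nu_t_inner, nu_t_outer)

k = np.array([0.5, 0.5])      # turbulent kinetic energy
eps = np.array([10.0, 10.0])  # dissipation rate
y = np.array([1e-4, 1e-2])    # wall distance (m)
nut = eddy_viscosity(k, eps, y, nu=1e-6)
print(nut[0] < nut[1])  # True: damping lowers the near-wall eddy viscosity
```

Because the inner layer needs no transport equation for ε, the near-wall mesh can be coarser than a low-Reynolds-number k-ε model would demand, which is the economy the paper's hybrid model targets.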

Relevance: 80.00%

Abstract:

Satellite-derived remote-sensing reflectance (Rrs) can be used for mapping biogeochemically relevant variables, such as the chlorophyll concentration and the Inherent Optical Properties (IOPs) of the water, at global scale for use in climate-change studies. Prior to generating such products, suitable algorithms have to be selected that are appropriate for the purpose. Algorithm selection needs to account for both qualitative and quantitative requirements. In this paper we develop an objective methodology designed to rank the quantitative performance of a suite of bio-optical models. The objective classification is applied using the NASA bio-Optical Marine Algorithm Dataset (NOMAD). Using in situ Rrs as input to the models, the performance of eleven semi-analytical models, as well as five empirical chlorophyll algorithms and an empirical diffuse attenuation coefficient algorithm, is ranked for spectrally-resolved IOPs, chlorophyll concentration and the diffuse attenuation coefficient at 489 nm. The sensitivity of the objective classification and the uncertainty in the ranking are tested using a Monte Carlo approach (bootstrapping). Results indicate that the performance of the semi-analytical models varies depending on the product and wavelength of interest. For chlorophyll retrieval, empirical algorithms generally perform better than semi-analytical models. The performance of these empirical models reflects either their immunity to scale errors or instrument noise in Rrs data, or simply that the data used for model parameterisation were not independent of NOMAD. Nonetheless, uncertainty in the classification suggests that the performance of some semi-analytical algorithms at retrieving chlorophyll is comparable with that of the empirical algorithms. For phytoplankton absorption at 443 nm, some semi-analytical models also perform with accuracy similar to that of an empirical model.
We discuss the potential biases, limitations and uncertainty in the approach, as well as additional qualitative considerations for algorithm selection for climate-change studies. Our classification has the potential to be routinely implemented, such that the performance of emerging algorithms can be compared with existing algorithms as they become available. In the long-term, such an approach will further aid algorithm development for ocean-colour studies.
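The bootstrap test of ranking uncertainty can be illustrated as follows (with synthetic residuals, not NOMAD data): resample the match-ups with replacement and record how often one model out-ranks another on RMSE.

```python
import numpy as np

def bootstrap_rank_prob(err_a, err_b, n_boot=2000, seed=1):
    """Fraction of bootstrap resamples in which model A has lower RMSE
    than model B -- a simple measure of ranking uncertainty."""
    rng = np.random.default_rng(seed)
    err_a, err_b = np.asarray(err_a), np.asarray(err_b)
    n = err_a.size
    wins = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample match-ups with replacement
        rmse_a = np.sqrt(np.mean(err_a[idx] ** 2))
        rmse_b = np.sqrt(np.mean(err_b[idx] ** 2))
        wins += rmse_a < rmse_b
    return wins / n_boot

rng = np.random.default_rng(0)
resid_a = 0.10 * rng.standard_normal(200)  # hypothetical model-A residuals
resid_b = 0.14 * rng.standard_normal(200)  # hypothetical model-B residuals
print(bootstrap_rank_prob(resid_a, resid_b))  # near 1: A reliably ranks first
```

A win fraction near 0.5 would indicate that the ranking of the two algorithms is not robust to sampling, which is the kind of uncertainty the classification above is designed to expose.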

Relevance: 80.00%

Abstract:

OBJECTIVE: To assess whether the impedance cardiogram recorded by an automated external defibrillator during cardiac arrest can facilitate emergency care by lay persons. Lay persons are poor at emergency pulse checks (sensitivity 84%, specificity 36%); guidelines recommend that they should not be performed. The impedance cardiogram (dZ/dt) is used to indicate stroke volume. Can an impedance cardiogram algorithm in a defibrillator rapidly determine circulatory arrest and facilitate prompt initiation of external cardiac massage?

DESIGN: Clinical study.

SETTING: University hospital.

PATIENTS: Phase 1 patients attended for myocardial perfusion imaging. Phase 2 patients were recruited during cardiac arrest. This group included nonarrest controls.

INTERVENTIONS: The impedance cardiogram was recorded through defibrillator/electrocardiographic pads oriented in the standard cardiac arrest position.

MEASUREMENTS AND MAIN RESULTS: Phase 1: Stroke volumes from gated myocardial perfusion imaging scans were correlated with parameters from the impedance cardiogram system (dZ/dt(max) and the peak amplitude of the Fast Fourier Transform of dZ/dt between 1.5 Hz and 4.5 Hz). Multivariate analysis was performed to fit the gated-scan stroke volumes with linear and quadratic terms for dZ/dt(max) and the Fast Fourier Transform, to identify significant parameters for incorporation into a cardiac arrest diagnostic algorithm. The square of the peak amplitude of the Fast Fourier Transform of dZ/dt was the best predictor of reduction in the gated-scan stroke volumes (range = 33-85 mL; p = .016). Having established that the two-pad impedance cardiogram system could detect differences in stroke volume, we assessed its performance in diagnosing cardiac arrest. Phase 2: The impedance cardiogram was recorded in 132 "cardiac arrest" patients (53 training, 79 validation) and 97 controls (47 training, 50 validation): the diagnostic algorithm indicated cardiac arrest with sensitivities and specificities (+/- exact 95% confidence intervals) of 89.1% (85.4-92.1) and 99.6% (99.4-99.7; training) and 81.1% (77.6-84.3) and 97% (96.7-97.4; validation).

CONCLUSIONS: The impedance cardiogram algorithm is a significant marker of circulatory collapse. Automated defibrillators with an integrated impedance cardiogram could improve emergency care by lay persons, enabling rapid and appropriate initiation of external cardiac massage.
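The band-limited spectral feature described above can be sketched in a few lines (with illustrative synthetic signals, not the study's recordings): take the FFT of dZ/dt and read off the peak amplitude between 1.5 and 4.5 Hz.

```python
import numpy as np

def icg_band_peak(dzdt, fs):
    """Peak spectral amplitude of dZ/dt between 1.5 and 4.5 Hz, the band
    that carries the cardiogenic component of the impedance signal."""
    spec = np.abs(np.fft.rfft(dzdt)) / len(dzdt)
    freqs = np.fft.rfftfreq(len(dzdt), d=1.0 / fs)
    band = (freqs >= 1.5) & (freqs <= 4.5)
    return spec[band].max()

fs = 100.0  # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
pulsatile = 0.5 * np.sin(2 * np.pi * 2.0 * t)  # circulation: ~120 bpm component
arrest = 0.02 * np.random.default_rng(0).standard_normal(t.size)  # no cardiac output
print(icg_band_peak(pulsatile, fs) > icg_band_peak(arrest, fs))  # True
```

A threshold on this band peak (and on its square, the study's best predictor) is the kind of decision rule the diagnostic algorithm could apply within a few seconds of pad placement.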

Relevance: 80.00%

Abstract:

An algorithm based only on the impedance cardiogram (ICG) recorded through two defibrillation pads, using the strongest frequency component and its amplitude, could be incorporated into a defibrillator to determine circulatory arrest and reduce delays in starting cardiopulmonary resuscitation (CPR). Frequency analysis of the ICG signal is carried out by integer filters on a sample-by-sample basis. These are simpler, lighter and more versatile than the FFT. This alternative approach, although less accurate, is preferred because the limited processing capacity of the devices could compromise the real-time usability of the FFT. The two techniques were compared across a data set comprising 13 cases of cardiac arrest and 6 normal controls. The best filters were refined on this training set, and an algorithm for the detection of cardiac arrest was trained on a wider data set. The algorithm was finally tested on a validation set. The ICG was recorded in 132 cardiac arrest patients (53 training, 79 validation) and 97 controls (47 training, 50 validation): the diagnostic algorithm indicated cardiac arrest with a sensitivity of 81.1% (77.6-84.3) and specificity of 97.1% (96.7-97.4) for the validation set (95% confidence intervals). Automated defibrillators with integrated ICG analysis have the potential to improve emergency care by lay persons by enabling more rapid and appropriate initiation of CPR, and combined with ECG analysis they could improve the detection of cardiac arrest.
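One simple family of sample-by-sample integer filters, not necessarily the one used in the study, is the difference of two running box sums, which needs only integer additions and subtractions per sample yet rejects DC while passing faster oscillations:

```python
def band_emphasis(samples, short_win, long_win):
    """Difference of two running sums, updated sample by sample with
    integer adds/subtracts only -- a crude band-pass far cheaper than
    an FFT on a processing-limited device."""
    short_sum = long_sum = 0
    short_buf = [0] * short_win
    long_buf = [0] * long_win
    out = []
    for i, x in enumerate(samples):
        short_sum += x - short_buf[i % short_win]
        short_buf[i % short_win] = x
        long_sum += x - long_buf[i % long_win]
        long_buf[i % long_win] = x
        # Cross-multiply instead of dividing, to stay in integer arithmetic.
        out.append(short_sum * long_win - long_sum * short_win)
    return out

const = band_emphasis([10] * 32, 1, 8)     # steady (DC) input
alt = band_emphasis([10, -10] * 16, 1, 8)  # oscillating input
print(const[-1], alt[-1])  # DC is rejected (0); the oscillation passes
```

Per sample this costs a handful of integer operations regardless of window length, which is why such filters suit the limited processors the abstract describes, at some cost in frequency selectivity compared with the FFT.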

Relevance: 80.00%

Abstract:

In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantization, described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound on the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with these introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.
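The logarithmic quantizer and the sector bound that makes its error norm-bounded can be sketched directly (a generic illustration with arbitrarily chosen density rho and reference level u0, not the paper's parameters):

```python
import numpy as np

def log_quantize(v, rho=0.8, u0=1.0):
    """Logarithmic quantizer with levels u0 * rho**i. Within each
    quantization interval, q(v) = (1 + e) * v with |e| <= delta, where
    delta = (1 - rho) / (1 + rho): the quantization error is thus a
    norm-bounded multiplicative uncertainty, the form exploited in
    robust recursive filter design."""
    if v == 0:
        return 0.0
    mag = abs(v)
    c = u0 * (1 + rho) / 2  # interval boundaries sit at c * rho**i
    i = int(np.floor(np.log(mag / c) / np.log(rho))) + 1
    return float(np.sign(v)) * u0 * rho ** i

rho = 0.8
delta = (1 - rho) / (1 + rho)
for v in [0.37, 1.0, 5.2, -2.6]:
    q = log_quantize(v, rho)
    assert abs(q - v) <= delta * abs(v) + 1e-12  # sector bound holds
print("sector bound delta =", round(delta, 4))
```

Because the relative error never exceeds delta, the quantized measurement can be written as (1 + e)·h(x) with |e| ≤ delta and absorbed into the same norm-bounded uncertainty machinery that handles the linearization error.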

Relevance: 80.00%

Abstract:

The overall aim of the study was to understand the microphytoplankton community composition and its variations in a highly complex and dynamic marine ecosystem, the northern Arabian Sea. The data generated provide a first-of-its-kind account of the major primary producers of the region. The microphytoplankton community structure responded significantly to variations in hydrographic conditions during the winter monsoon period. Interannual variations were observed within the microphytoplankton community, associated with variability in temperature patterns and the intensity of convective mixing. Changing bloom patterns and dominant species among the phytoplankton community open new frontiers for more intensive study of biological responses to physical processes. The production of large amounts of organic matter, resulting from intense blooming of Noctiluca as well as diatom aggregations, augments the particulate organic substances in this ecosystem and thereby influences the carbon dynamics of the northern Arabian Sea. Detailed investigations based on time series as well as trophodynamic studies are necessary to elucidate the carbon flux and the associated impacts of winter-spring blooms in the North Eastern Arabian Sea (NEAS). The Arabian Sea is considered one of the hotspots for carbon dynamics, and these pioneering records of the major primary producers will fuel carbon-based export production studies and provide a platform for future research. Moreover, upcoming satellite-based remote sensing research on productivity patterns can utilize these in situ observations and taxonomic data sets of phytoplankton to validate bloom-specific algorithm development and implementation. Furthermore, the Saurashtra coast is a major fishing zone of the Indian EEZ, and studies of phytoplankton in this region provide valuable raw data for fishery prediction models and for identifying fishing zones. With the baseline data obtained, further trophodynamic studies can be initiated in the complex, productive NEAS ecosystem, which remains largely unexplored.

Relevance: 80.00%

Abstract:

Objective: To describe the factors related to the decision for surgical management in patients with hydronephrosis secondary to ureteropelvic junction obstruction in the Pediatric Urology service of a level-IV institution. Materials and Methods: A retrospective descriptive study was carried out. A convenience sample of 100 patients with an antenatal diagnosis of hydronephrosis was selected; 37 of them underwent surgical management for ureteropelvic junction obstruction (UPJO) between 2009 and 2012. The factors leading to this decision were evaluated. Results: Patients with a postnatal diagnosis of UPJO represented 37% of the population. The indication for surgical management was caliceal dilatation (SFU grade 3) in 13 patients (35.1%), deterioration of renal function in 21 patients (56.8%), and recurrent urinary tract infection in the remainder (8.1%). A 30% progression in the severity of dilatation was found in the postnatal period: 9 patients (24% of the sample) had SFU grades 3 and 4 in the prenatal period, versus 20 (54%) in the postnatal period who underwent surgical management. Among the patients for whom precise percentage-variation values from renal scintigraphy were available (16% of the sample), a 50% variation in deterioration of renal function was found. Conclusion: In the group of Colombian patients studied from the pediatric urology outpatient service, the decision for surgical management in patients with UPJO was found to be consistent with the world literature, the indications being the presence of caliceal dilatation and deterioration of renal function on DMSA scintigraphy.