978 results for null distribution analysis
Abstract:
In this work we describe the use of bilinear statistical models as a means of factoring shape variability into two components, one attributed to inter-subject variation and the other to the intrinsic dynamics of the human heart. We show that it is feasible to reconstruct the shape of the heart at discrete points in the cardiac cycle: provided we are given a small number of shape instances representing the same heart at different points in the same cycle, the bilinear model can be used to establish this. Using a temporal and a spatial alignment step in the preprocessing of the shapes, around half of the reconstruction errors were on the order of the axial image resolution of 2 mm, and over 90% were within 3.5 mm. From this, we conclude that the dynamics were indeed separated from the inter-subject variability in our dataset.
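A minimal sketch of how such a bilinear factorization can be set up, assuming each shape is given as a fixed-length coordinate vector and fitting an asymmetric bilinear model by SVD; the dimensions, variable names, and least-squares reconstruction step below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical dimensions: S subjects, C cardiac phases, d shape coordinates, J factors
S, C, d, J = 10, 5, 60, 3
Y = np.random.randn(S, C, d)                   # Y[s, c] = shape of subject s at phase c

# Asymmetric bilinear model: y_{s,c} ≈ A_s @ b_c, with A_s a subject "style" matrix
# and b_c a phase ("dynamics") factor.  Fit by stacking subjects and taking an SVD.
Y_stacked = Y.transpose(0, 2, 1).reshape(S * d, C)
U, sv, Vt = np.linalg.svd(Y_stacked, full_matrices=False)
A = (U[:, :J] * sv[:J]).reshape(S, d, J)       # subject-specific matrices A_s
B = Vt[:J, :]                                  # phase factors b_c as columns

# Reconstruction for a new heart from a few observed phases: solve for its style
# matrix by least squares, then synthesize the shape at every phase of the cycle.
obs = [0, 2, 4]                                # phases actually imaged (assumption)
Y_new_obs = np.random.randn(d, len(obs))
A_new_T, *_ = np.linalg.lstsq(B[:, obs].T, Y_new_obs.T, rcond=None)
Y_new_full = A_new_T.T @ B                     # (d, C): estimated shape at all phases
print(Y_new_full.shape)
```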
Abstract:
Laser diffraction (LD) and static image analysis (SIA) of rectangular particles [United States Pharmacopeia, USP30-NF25, General Chapter <776>, Optical Microscopy] have been systematically studied. To rule out sample dispersion and particle orientation as the root cause of differences in size distribution profiles, we immobilize powder samples on a glass plate by means of a dry disperser. For a defined region of the glass plate, we measure the diffraction pattern induced by the dispersed particles and the 2D dimensions of the individual particles, using LD and optical microscopy, respectively. We demonstrate a correlation between LD and SIA, with the scattering intensity of the individual particles as the dominant factor. In theory, the scattering intensity is related to the square of the projected area of both spherical and rectangular particles. In traditional LD, the size distribution profile is dominated by the maximum projected area of the particles (A). The diffraction diameters of a rectangular particle with length L and breadth B, as measured by the LD instrument, approximately correspond to spheres of diameter ØL and ØB, respectively. Differences in the scattering intensity between spherical and rectangular particles suggest that the contribution made to the overall LD volume probability distribution by each rectangular particle is proportional to A²/L and A²/B. Accordingly, for rectangular particles the scattering-intensity-weighted diffraction diameter (SIWDD) explains an overestimation of their shortest dimension and an underestimation of their longest dimension. This study analyzes various samples of particles whose length ranges from approximately 10 to 1000 μm. The correlation we demonstrate between LD and SIA can be used to improve the validation of LD methods against SIA data for a variety of pharmaceutical powders with different rectangular particle sizes and shapes.
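A rough numerical illustration of the weighting described above, assuming each rectangular particle contributes at the two diffraction diameters ØL and ØB with weights proportional to A²/L and A²/B; the particle dimensions and the pairing of weight to diameter are one reading of the abstract, not values from the study.

```python
import numpy as np

# Hypothetical rectangular particles (micrometres): length L and breadth B
L = np.array([120.0, 300.0, 80.0, 500.0])
B = np.array([40.0, 60.0, 30.0, 100.0])
A = L * B                                    # maximum projected area of each particle

# Each particle is assumed to contribute at two diffraction diameters (~ØL and ~ØB),
# with weights proportional to A**2/L and A**2/B respectively.
diameters = np.concatenate([L, B])
weights = np.concatenate([A**2 / L, A**2 / B])
weights /= weights.sum()

# Scattering-intensity-weighted size distribution on a log-spaced grid (10-1000 um)
bins = np.logspace(1, 3, 25)
hist, edges = np.histogram(diameters, bins=bins, weights=weights)
for lo, hi, p in zip(edges[:-1], edges[1:], hist):
    if p > 0:
        print(f"{lo:7.1f}-{hi:7.1f} um : {p:.3f}")
```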
Power Electronic Converters in Low-Voltage Direct Current Distribution – Analysis and Implementation
Abstract:
Over recent years, smart grids have received great public attention. Many of the proposed functionalities rely on power electronics, which, together with the communication network, play a key role in the smart grid. However, “smartness” is not the only driver motivating research on distribution networks based on power electronics; the vulnerability of networks to natural hazards has tightened the supply security requirements set both by electricity end-users and by the authorities. Because of favorable price development and advancements in the field, direct current (DC) distribution has become an attractive alternative for distribution networks. In this doctoral dissertation, power electronic converters for a low-voltage DC (LVDC) distribution system are investigated. These include the rectifier located at the beginning of the LVDC network and the customer-end inverter (CEI) on the customer premises. Rectifier topologies are introduced, and topologies are chosen for the analysis according to the LVDC system requirements. Similarly, suitable CEI topologies are addressed and selected for study. The application of power electronics to electricity distribution poses some new challenges. Because the electricity end-user is supplied through the CEI, the CEI is responsible for the end-user voltage quality, but it also has to be able to supply adequate current in all operating conditions, including a short circuit, to ensure electrical safety. Supplying short-circuit current with power electronics requires additional measures; therefore, the short-circuit behavior is described and methods to handle the high-current supply to the fault are proposed. Power electronic converters also produce common-mode (CM) and radio-frequency (RF) electromagnetic interference (EMI), which is not present in AC distribution, and so its magnitude is investigated. To enable comprehensive research on LVDC distribution, a research site was built in a public low-voltage distribution network. The implementation was a joint effort of the LVDC research team at Lappeenranta University of Technology and the power company Suur-Savon Sähkö Oy. Measurements can now be conducted in an actual environment, which is especially important for the EMI studies. The main results of the work concern the short-circuit operation of the CEI and the EMI issues. The applicability of power electronic converters to electricity distribution is demonstrated, and suggestions for future research are proposed.
Abstract:
MS-based proteomic methods were utilised for the first time in the discovery of novel penile cancer biomarkers. MALDI MS imaging was used to obtain the in situ biomolecular MS profile of squamous cell carcinoma of the penis, which was then compared to benign epithelial MS profiles. Spectra from cancerous and benign tissue areas were examined to identify MS peaks that best distinguished normal epithelial cells from invasive squamous epithelial cells, providing evidence that S100A4 is differentially expressed. Verification by immunohistochemistry showed positive staining for S100A4 in a sub-population of invasive, but not benign, epithelial cells.
Abstract:
To assess the quality of school education, much of educational research is concerned with comparisons of test-score means or medians. In this paper, we shift this focus and explore test-score data by addressing some often neglected questions. In the case of Brazil, the mean Math test score for fourth-grade students declined by approximately 0.2 standard deviations in the late 1990s. But what about changes in the distribution of scores? It is unclear whether the decline was caused by a deterioration in student performance in the upper and/or lower tails of the distribution. To answer this question, we propose the use of the relative distribution method developed by Handcock and Morris (1999). The advantage of this methodology is that it compares two distributions of test scores through a single relative distribution and synthesizes all the differences between them. Moreover, it is possible to decompose the total difference between the two distributions into a level effect (changes in the median) and a shape effect (changes in the shape of the distribution). We find that the decline in average test scores is caused mainly by a worsening in the position of students throughout the distribution of scores and is not specific to any particular quantile.
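A minimal sketch of the relative distribution idea and its level/shape decomposition, assuming synthetic score samples and a simple median-shift adjustment; this follows the spirit of Handcock and Morris (1999) rather than reproducing the authors' exact procedure or the Brazilian data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic fourth-grade Math scores: a reference year and a later comparison year
ref = rng.normal(250.0, 50.0, 5000)          # reference cohort
comp = rng.normal(240.0, 60.0, 5000)         # comparison cohort: lower median, wider spread

def ecdf(sample):
    s = np.sort(sample)
    return lambda x: np.searchsorted(s, x, side="right") / s.size

# Relative data: comparison scores evaluated on the reference CDF.  A uniform
# histogram on [0, 1] would mean "no change"; deviations show where in the
# reference distribution the comparison students fall.
r_total = ecdf(ref)(comp)

# Level effect: shift the reference so its median matches the comparison median.
# Re-evaluating against the shifted reference isolates the remaining shape effect.
shift = np.median(comp) - np.median(ref)
r_shape = ecdf(ref + shift)(comp)

for name, r in [("total difference", r_total), ("shape effect only", r_shape)]:
    hist, _ = np.histogram(r, bins=10, range=(0.0, 1.0))
    print(f"{name:18s}", np.round(hist / r.size, 3))
```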
Abstract:
A methodology, fluorescence-intensity distribution analysis, has been developed for confocal microscopy studies in which the fluorescence intensity of a sample with a heterogeneous brightness profile is monitored. An adjustable formula, modeling the spatial brightness distribution, and the technique of generating functions for calculation of theoretical photon count number distributions serve as the two cornerstones of the methodology. The method permits the simultaneous determination of concentrations and specific brightness values of a number of individual fluorescent species in solution. Accordingly, we present an extremely sensitive tool to monitor the interaction of fluorescently labeled molecules or other microparticles with their respective biological counterparts that should find a wide application in life sciences, medicine, and drug discovery. Its potential is demonstrated by studying the hybridization of 5′-(6-carboxytetramethylrhodamine)-labeled and nonlabeled complementary oligonucleotides and the subsequent cleavage of the DNA hybrids by restriction enzymes.
Abstract:
Null dereferencing is one of the most frequent bugs in Java systems, causing programs to crash due to an uncaught NullPointerException. Developers often fix this bug by introducing a guard (i.e., a null check) on the potentially-null objects before using them. In this paper we investigate the null checks in 717 open-source Java systems to understand when and why developers introduce null checks. We find that 35% of the if-statements are null checks. A deeper investigation shows that 71% of the checked-for-null objects are returned from method calls. This indicates that null checks have a serious impact on performance and that developers introduce null checks when they use methods that return null.
Abstract:
Gravity cores obtained from isolated seamounts located within, and rising up to 300 m above, the sediment-filled Peru-Chile Trench off Southern Central Chile (36°S-39°S) contain numerous turbidite layers that are much coarser than the hemipelagic background sedimentation. The mineralogical composition of some of the beds indicates a mixed origin from various source terrains, while the faunal assemblage of benthic foraminifera in one of the turbidite layers shows a mixed origin from upper shelf to middle-lower bathyal depths, which could point to a multi-source origin and therefore to earthquake triggering of the causative turbidity currents. The bathymetric setting and the grain size distribution of the sampled layers, together with swath echosounder and sediment echosounder data that map the distribution of turbidites on the elevated Nazca Plate, allow some estimates of the flow direction, flow velocity, and height of the causative turbidity currents. We discuss two alternative models of deposition, both of which imply high (175-450 m) turbidity currents, and we suggest a channelized transport process as the general mode of turbidite deposition. Whether these turbidites are suspension-fallout products of thick turbiditic flows or bedload deposits from sheet-like turbidity currents overwhelming elevated structures cannot be decided from our sedimentological data alone, but the specific morphology of the seamounts rather argues for the first option. Oxygen isotope stratigraphy of one of the cores indicates that the turbiditic sequences were deposited during the last Glacial period and the following transition period, and that turbiditic deposition stopped during the Holocene. This climatic coupling seems to be dominant, while the occurrence of megathrust earthquakes provides a trigger mechanism. This seismic triggering takes effect only during times of very high sediment supply to the shelf and slope.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
The application of nonlocal density functional theory (NLDFT) to determine the pore size distribution (PSD) of activated carbons using a nongraphitized carbon black, instead of graphitized thermal carbon black, as a reference system is explored. We show that in this case nitrogen and argon adsorption isotherms in activated carbons are precisely correlated by the theory, and such an excellent correlation would never be possible if the pore wall surface were assumed to be identical to that of graphitized carbon black. This suggests that the pore wall surfaces of activated carbon are closer to those of amorphous solids because of defects in the crystalline lattice, finite pore length, the presence of active centers, etc. Application of the NLDFT adapted to amorphous solids resulted in a quantitative description of N2 and Ar adsorption isotherms on nongraphitized carbon black BP280 at their respective boiling points. In the present paper we determined solid-fluid potentials from experimental adsorption isotherms on nongraphitized carbon black and subsequently used those potentials to model adsorption in slit pores and generate a corresponding set of local isotherms, which we used to determine the PSD functions of different activated carbons.
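A schematic of the final step described above, inverting the adsorption integral equation with a kernel of local slit-pore isotherms to obtain a PSD; the kernel here is a synthetic stand-in for NLDFT local isotherms, and the grids, functional forms, and noise level are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import nnls

# Adsorption integral equation: N_exp(p) ≈ Σ_w f(w) · N_local(p, w),
# where w is the slit-pore width and f(w) is the pore size distribution (PSD).
pressures = np.logspace(-5, 0, 60)           # relative pressures p/p0
widths = np.linspace(0.4, 4.0, 40)           # slit widths in nm (illustrative grid)

# Synthetic stand-in for the local isotherms: step-like pore filling, with
# narrower pores filling at lower relative pressure.
fill_p = 1e-4 * np.exp(2.5 * widths)
kernel = 1.0 / (1.0 + (fill_p[None, :] / pressures[:, None]) ** 4)

# Synthetic "experimental" isotherm generated from a known bimodal PSD plus noise
true_psd = np.exp(-(widths - 0.8) ** 2 / 0.05) + 0.5 * np.exp(-(widths - 2.0) ** 2 / 0.2)
n_exp = kernel @ true_psd + 0.01 * np.random.default_rng(1).normal(size=pressures.size)

# Recover the PSD by non-negative least squares (regularisation omitted for brevity)
psd, residual = nnls(kernel, n_exp)
top = widths[np.argsort(psd)[-3:]]
print(f"residual: {residual:.4f}; grid points with largest recovered PSD (nm): {np.round(top, 2)}")
```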
Abstract:
Photovoltaic (PV) solar power generation is proven to be effective and sustainable but is currently hampered by relatively high costs and low conversion efficiency. This paper addresses both issues by presenting a low-cost and efficient temperature distribution analysis for identifying PV module mismatch faults by thermography. Mismatch faults reduce the power output and cause potential damage to PV cells. This paper first defines three fault categories in terms of fault levels, which lead to different terminal characteristics of the PV modules. The three faults are investigated analytically and experimentally, and maintenance suggestions are provided for the different fault types. The proposed methodology combines the electrical and thermal characteristics of PV cells subjected to different fault mechanisms through simulation and experimental tests. Furthermore, the fault diagnosis method can be incorporated into maximum power point tracking schemes to shift the operating point of the PV string. The developed technology improves over existing approaches by locating the faulty cell with a thermal camera, providing a remedial measure, and maximizing the power output under faulty conditions.
Abstract:
Two years of harmonized aerosol number size distribution data from 24 European field monitoring sites have been analysed. The results give a comprehensive overview of European near-surface aerosol particle number concentrations and number size distributions between 30 and 500 nm dry particle diameter. Spatial and temporal distributions of aerosols in the particle sizes most important for climate applications are presented. We also analyse the annual, weekly and diurnal cycles of the aerosol number concentrations, provide log-normal fitting parameters for median number size distributions, and give guidance notes for data users. Emphasis is placed on the usability of the results within the aerosol modelling community. We also show that the aerosol number concentrations of Aitken and accumulation mode particles (with 100 nm dry diameter as a cut-off between modes) are related, although there is significant variation in the ratios of the modal number concentrations. Different aerosol and station types can be distinguished from these data, and the methodology has potential for further categorization of stations by aerosol number size distribution type. The European submicron aerosol was divided into characteristic types: Central European aerosol, characterized by single-mode median size distributions, unimodal number concentration histograms and low variability in CCN-sized aerosol number concentrations; Nordic aerosol, with low number concentrations but pronounced seasonal variation, especially of Aitken mode particles; and mountain sites (altitude over 1000 m a.s.l.), with a strong seasonal cycle in aerosol number concentrations, high variability, and very low median number concentrations. Southern and Western European regions had fewer stations, which decreases the regional coverage of these results. Aerosol number concentrations over Britain and Ireland had very high variance, and there are indications of mixed air masses from several source regions; the Mediterranean aerosol exhibits high seasonality and a strong accumulation mode in the summer. The greatest concentrations were observed at the Ispra station in Northern Italy, with high accumulation mode number concentrations in the winter. The aerosol number concentrations at the Arctic station Zeppelin in Ny-Ålesund, Svalbard, also have a strong seasonal cycle, with greater concentrations of accumulation mode particles in winter and a dominant Aitken mode in summer, indicating more recently formed particles. The number concentrations studied did not show any statistically significant regional work-week or weekday-related variation. Analysis products are made openly available to the research community on a freely accessible internet site. The results give the modelling community a reliable, easy-to-use and freely available comparison dataset of aerosol size distributions.
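As a small illustration of the log-normal fitting parameters mentioned above, the sketch below fits a single log-normal mode to a synthetic median number size distribution on a 30-500 nm grid; the functional form is the standard dN/dlogDp log-normal mode, but the data, grid, and starting values are assumptions, not the published fits.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_mode(dp, n_tot, dpg, sigma_g):
    """dN/dlogDp of a single log-normal mode; dp and dpg in nm, n_tot in cm^-3."""
    return (n_tot / (np.sqrt(2.0 * np.pi) * np.log10(sigma_g))
            * np.exp(-(np.log10(dp) - np.log10(dpg)) ** 2
                     / (2.0 * np.log10(sigma_g) ** 2)))

# Synthetic median size distribution, loosely mimicking a single-mode station
dp = np.logspace(np.log10(30.0), np.log10(500.0), 40)
rng = np.random.default_rng(3)
data = lognormal_mode(dp, 4000.0, 90.0, 1.8) * rng.lognormal(0.0, 0.05, dp.size)

# Fit the three mode parameters: total number concentration, geometric mean
# diameter and geometric standard deviation
(n_tot, dpg, sigma_g), _ = curve_fit(lognormal_mode, dp, data, p0=(1000.0, 100.0, 2.0))
print(f"N_tot ~ {n_tot:.0f} cm^-3, Dpg ~ {dpg:.0f} nm, sigma_g ~ {sigma_g:.2f}")
```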
Abstract:
Background: Microarray data are frequently used to characterize the expression profile of a whole genome and to compare the characteristics of that genome under several conditions. Geneset analysis methods have been described previously to analyze the expression values of several genes related by known biological criteria (metabolic pathway, pathology signature, co-regulation by a common factor, etc.) at the same time, and the cost of these methods allows more values to be used to help discover the underlying biological mechanisms. Results: As several methods assume different null hypotheses, we propose to reformulate the main question that biologists seek to answer. To determine which genesets are associated with expression values that differ between two experiments, we focus on three ad hoc criteria: expression levels, the direction of individual gene expression changes (up- or down-regulation), and correlations between genes. We introduce the FAERI methodology, tailored from a two-way ANOVA, to examine these criteria. The significance of the results was evaluated according to the self-contained null hypothesis, using label sampling or by inferring the null distribution from normally distributed random data. Evaluations performed on simulated data revealed that FAERI outperforms currently available methods for each type of set tested. We then applied the FAERI method to analyze three real-world datasets on the hypoxia response. FAERI was able to detect more genesets than other methodologies, and the genesets selected were coherent with current knowledge of the cellular response to hypoxia. Moreover, the genesets selected by FAERI were confirmed when the analysis was repeated on two additional related datasets. Conclusions: The expression values of genesets are associated with several biological effects. The underlying mathematical structure of the genesets allows for the analysis of data from several genes at the same time. Focusing on expression levels, the direction of the expression changes, and correlations, we showed that a two-step data reduction allowed us to significantly improve the performance of geneset analysis using a modified two-way ANOVA procedure, and to detect genesets that current methods fail to detect.
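A minimal sketch of the label-sampling step used to evaluate a self-contained null hypothesis: sample labels are permuted to build a null distribution of a geneset statistic, and the observed value is compared against it. The expression data and the simple between-group statistic below are stand-ins, not the FAERI two-way ANOVA reduction itself.

```python
import numpy as np

rng = np.random.default_rng(7)
# Expression matrix for one geneset: rows = genes, columns = samples
n_genes, n_ctrl, n_hypoxia = 25, 8, 8
expr = rng.normal(0.0, 1.0, (n_genes, n_ctrl + n_hypoxia))
expr[:, n_ctrl:] += 0.6                       # simulated up-regulation under hypoxia
labels = np.array([0] * n_ctrl + [1] * n_hypoxia)

def geneset_stat(x, y):
    """Stand-in geneset statistic: mean absolute between-group difference."""
    return np.abs(x[:, y == 1].mean(axis=1) - x[:, y == 0].mean(axis=1)).mean()

observed = geneset_stat(expr, labels)

# Label sampling: permute the sample labels to build the null distribution of
# the statistic under the self-contained null hypothesis, then derive a p-value.
n_perm = 2000
null = np.array([geneset_stat(expr, rng.permutation(labels)) for _ in range(n_perm)])
p_value = (1 + np.sum(null >= observed)) / (1 + n_perm)
print(f"observed = {observed:.3f}, permutation p ~ {p_value:.4f}")
```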
Abstract:
Objective: To analyze the geographical distribution of tuberculosis (TB), its incidence and prevalence, and TB-HIV coinfection in the districts of Porto Alegre from 2007 to 2011. Method: An ecological, descriptive time-series study that used descriptive and geoprocessing techniques. Results: In total, 3,369 incident cases and 3,998 prevalent cases of pulmonary TB were recorded. In both groups, cases predominated among males and Caucasians. Seventeen districts showed prevalence rates above 79.2 cases/100,000 inhabitants, and 15 of them had incidence rates above 73.7 cases/100,000 inhabitants. The TB-HIV coinfection rates reached 67% in some districts, well above the city average (30%). Conclusion: The distribution analysis showed that the reformulation and restructuring of policies and health services in Porto Alegre are essential.