932 results for Most Productive Scale Size
Abstract:
Results of experimental investigations on the relationship between the nanoscale morphology of carbon-doped hydrogenated silicon oxide (SiOCH) low-k films and their electron spectrum of defect states are presented. The SiOCH films were deposited from a trimethylsilane (3MS)-oxygen mixture in a 13.56 MHz plasma-enhanced chemical vapor deposition (PECVD) system at RF power densities from 1.3 to 2.6 W/cm^2 and gas pressures of 3, 4, and 5 Torr. The atomic structure of the SiOCH films is a mixture of amorphous-nanocrystalline SiO2-like and SiC-like phases. FTIR spectroscopy and atomic force microscopy suggest that the volume fraction of the SiC-like phase increases from ∼0.2 to 0.4 with RF power. The average size of the nanoscale surface morphology elements of the SiO2-like matrix can be controlled through the RF power density and source gas flow rates. The electron density of defect states N(E) of the SiOCH films was investigated with the DLTS technique in the energy range up to 0.6 eV below the bottom of the conduction band. Distinct N(E) peaks at 0.25-0.35 eV and 0.42-0.52 eV below the conduction band bottom were observed. The first N(E) peak is identified as originating from E1-like centers in the SiC-like phase. The volume density of the defects varies from 10^11 to 10^17 cm^-3 depending on the specific conditions of the PECVD process.
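For context, the trap depths quoted above are conventionally extracted from the temperature dependence of the thermal emission rate; a standard DLTS relation (general textbook material, not specific to this paper's setup) is

$$ e_n(T) = \sigma_n \,\langle v_{\mathrm{th}} \rangle\, N_c \,\exp\!\left(-\frac{E_c - E_T}{k_B T}\right), $$

where \(\sigma_n\) is the capture cross-section, \(\langle v_{\mathrm{th}}\rangle\) the thermal velocity and \(N_c\) the effective conduction-band density of states. Since \(\langle v_{\mathrm{th}}\rangle N_c \propto T^2\), an Arrhenius plot of \(\ln(e_n/T^2)\) versus \(1/T\) has slope \(-(E_c - E_T)/k_B\), yielding the 0.25-0.35 eV and 0.42-0.52 eV depths quoted.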
Abstract:
This study examines the roles that the size of a victimised organisation and the size of the victim's loss play in attitudes regarding the acceptability of 12 questionable consumer actions. A sample of 815 American adults rated each scenario on a scale anchored by "very acceptable" and "very unacceptable". The size of the victimised organisation was shown to influence consumers' opinions, with more disdain directed towards consumers who take advantage of smaller businesses. Similarly, the respondents tended to be more critical of these actions when the loss incurred by the victimised organisation was large. A 2x2 matrix delineated the extent to which opinions regarding the 12 actions differed depending upon the mediating variable under scrutiny.
Abstract:
The problem of clustering a large document collection is challenged not only by the number of documents and the number of dimensions, but also by the number and sizes of the clusters. Traditional clustering methods fail to scale when they need to generate a large number of clusters. Furthermore, when the cluster sizes in the solution are heterogeneous, i.e. some of the clusters are large, similarity measures tend to degrade. A ranking-based clustering method is proposed to deal with these issues in the context of the Social Event Detection task. Ranking scores are used to select a small number of the most relevant clusters against which a document is compared and placed. Additionally, instead of conventional cluster centroids, cluster patches (hub-like sets of documents) are proposed to represent clusters. Text, temporal, spatial and visual content information collected from the social event images is used to calculate similarity. Results show that these strategies strike a balance between the performance and the accuracy of the clustering solution.
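A minimal sketch of what such a ranking-based assignment step could look like, assuming cosine similarity over vector representations; the names (patch, assign) and the constants are illustrative, not taken from the paper:

```python
# Hedged sketch: rank clusters cheaply, then compare a document only
# against the "patches" (hub-like member subsets) of the top-ranked ones.
import numpy as np

PATCH_SIZE = 5   # documents per cluster patch (assumed)
TOP_K = 10       # candidate clusters scored in full (assumed)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def patch(cluster_docs):
    """Represent a cluster by the PATCH_SIZE members closest to the
    cluster mean, a hub-like subset standing in for a centroid."""
    mean = cluster_docs.mean(axis=0)
    order = np.argsort([-cosine(d, mean) for d in cluster_docs])
    return cluster_docs[order[:PATCH_SIZE]]

def assign(doc, clusters):
    """clusters: list of (n_i, dim) arrays. Rank by mean-vector
    similarity, then place the document by best patch similarity."""
    ranked = sorted(range(len(clusters)),
                    key=lambda i: -cosine(doc, clusters[i].mean(axis=0)))
    best, best_sim = None, -1.0
    for i in ranked[:TOP_K]:
        sim = max(cosine(doc, p) for p in patch(clusters[i]))
        if sim > best_sim:
            best, best_sim = i, sim
    return best, best_sim
```

The point of the two-stage design is that the expensive patch comparison is performed for only TOP_K clusters rather than for all of them.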
Abstract:
Summary 1. Acoustic methods are used increasingly to survey and monitor bat populations. However, the use of acoustic methods at continental scales can be hampered by the lack of standardized and objective methods to identify all species recorded. This makes comparable continent-wide monitoring difficult, impeding progress towards developing biodiversity indicators, transboundary conservation programmes and the monitoring of species distribution changes. 2. Here we developed a continental-scale classifier for the acoustic identification of bats, which can be used throughout Europe to ensure objective, consistent and comparable species identifications. We selected 1350 full-spectrum reference calls from a set of 15,858 calls of 34 European species, from EchoBank, a global echolocation call library. We assessed 24 call parameters to evaluate how well they distinguish between species and used the 12 most useful to train a hierarchy of ensembles of artificial neural networks to distinguish the echolocation calls of these bat species. 3. Calls are first classified into one of five call-type groups, with a median accuracy of 97.6%. The median species-level classification accuracy is 83.7%, providing robust classification for most European species and an estimate of classification error for each species. 4. These classifiers were packaged into an online tool, iBatsID, which is freely available, enabling anyone to classify European calls in an objective and consistent way and allowing standardized acoustic identification across the continent. 5. Synthesis and applications. iBatsID is the first freely available and easily accessible continental-scale bat call classifier, providing the basis for standardized, continental acoustic bat monitoring in Europe. This method can provide key information to managers and conservation planners on distribution changes and changes in bat species activity through time.
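A sketch of the two-stage hierarchy described above (call-type group first, then species within the group), assuming numpy arrays of the 12 call parameters; a single sklearn MLP per node stands in for the paper's ensembles of artificial neural networks:

```python
# Hedged sketch of a hierarchical call classifier: stage 1 predicts the
# call-type group, stage 2 uses a per-group model to predict the species.
import numpy as np
from sklearn.neural_network import MLPClassifier

class HierarchicalCallClassifier:
    def __init__(self):
        self.group_clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
        self.species_clfs = {}  # one species classifier per call-type group

    def fit(self, X, groups, species):
        """X: (n, 12) call parameters; groups, species: label arrays."""
        self.group_clf.fit(X, groups)
        for g in np.unique(groups):
            m = groups == g
            clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
            clf.fit(X[m], species[m])
            self.species_clfs[g] = clf

    def predict(self, X):
        g = self.group_clf.predict(X)               # stage 1: call-type group
        return np.array([self.species_clfs[gi].predict(x.reshape(1, -1))[0]
                         for gi, x in zip(g, X)])   # stage 2: species
```

Splitting the problem this way lets each species-level model specialize on acoustically similar calls, which is consistent with the higher group-level accuracy reported.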
Abstract:
There is an increasing need in biology and clinical medicine to robustly and reliably measure tens to hundreds of peptides and proteins in clinical and biological samples with high sensitivity, specificity, reproducibility and repeatability. Previously, we demonstrated that LC-MRM-MS with isotope dilution has suitable performance for quantitative measurements of small numbers of relatively abundant proteins in human plasma, and that the resulting assays can be transferred across laboratories while maintaining high reproducibility and quantitative precision. Here we significantly extend that earlier work, demonstrating that 11 laboratories using 14 LC-MS systems can develop, determine analytical figures of merit for, and apply highly multiplexed MRM-MS assays targeting 125 peptides derived from 27 cancer-relevant proteins and 7 control proteins to precisely and reproducibly measure the analytes in human plasma. To ensure consistent generation of high-quality data, we incorporated a system suitability protocol (SSP) into our experimental design. The SSP enabled real-time monitoring of LC-MRM-MS performance during assay development and implementation, facilitating early detection and correction of chromatographic and instrumental problems. Low to sub-nanogram/mL sensitivity for proteins in plasma was achieved by one-step immunoaffinity depletion of 14 abundant plasma proteins prior to analysis. Median intra- and inter-laboratory reproducibility was <20%, sufficient for most biological studies and candidate protein biomarker verification. Digestion recovery of peptides was assessed, and quantitative accuracy was improved, using heavy isotope-labeled versions of the proteins as internal standards. Using the highly multiplexed assay, participating laboratories were able to precisely and reproducibly determine the levels of a series of analytes in blinded samples used to simulate an inter-laboratory clinical study of patient samples. Our study further establishes that LC-MRM-MS using stable isotope dilution, with appropriate attention to analytical validation and appropriate quality control measures, enables sensitive, specific, reproducible and quantitative measurements of proteins and peptides in complex biological matrices such as plasma.
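As a small illustration of the reproducibility figure, assuming (as is typical in such studies, though not stated here) that reproducibility is expressed as a coefficient of variation (CV, %) of replicate measurements; the numbers are invented:

```python
# Hedged sketch: intra-lab CV from replicates within each lab, inter-lab
# CV from the lab means, compared against the <20% criterion.
import numpy as np

def cv_percent(x):
    """Coefficient of variation of replicate measurements, in percent."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

# illustrative peak-area ratios for one peptide in three labs
replicates = {"lab_A": [0.95, 1.02, 0.99],
              "lab_B": [1.10, 1.05, 1.08],
              "lab_C": [0.90, 0.94, 0.92]}

intra = {lab: cv_percent(v) for lab, v in replicates.items()}
inter = cv_percent([np.mean(v) for v in replicates.values()])
print(intra, inter)
```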
Abstract:
Affect is an important feature of multimedia content and conveys valuable information for multimedia indexing and retrieval. Most existing studies of affective content analysis are limited to low-level features or mid-level representations and are generally criticized for their incapacity to bridge the gap between low-level features and high-level human affective perception. The facial expressions of subjects in images carry important semantic information that can substantially influence human affective perception, but they have seldom been investigated for the affective classification of facial images in practical applications. This paper presents an automatic image emotion detector (IED) for the affective classification of practical (or non-laboratory) data using facial expressions, where many "real-world" challenges are present, including pose, illumination, and size variations. The proposed method is novel, with its framework designed specifically to overcome these challenges using multi-view versions of face and fiducial point detectors and a combination of point-based texture and geometry features. Performance comparisons over several key parameters of the relevant algorithms are conducted to find the optimum settings for high accuracy and fast computation. A comprehensive set of experiments with existing and new datasets shows that the method is robust to pose variations, fast, appropriate for large-scale data, and as accurate as the method with state-of-the-art performance on laboratory-based data. The proposed method was also applied to the affective classification of images from the British Broadcasting Corporation (BBC) in a task typical of a practical application, providing some valuable insights.
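A sketch of what combining point-based texture and geometry features could look like, assuming fiducial points have already been located (the multi-view face and fiducial detectors themselves are outside this sketch); LBP histograms and pairwise point distances are common choices and are assumptions here, not the paper's exact features:

```python
# Hedged sketch: per-point LBP texture plus scale-normalized geometry,
# concatenated into one descriptor for an expression classifier.
import numpy as np
from scipy.spatial.distance import pdist
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def texture_features(gray, points, half=8):
    """LBP histogram of a small patch around each fiducial point."""
    feats = []
    for x, y in points.astype(int):
        patch = gray[max(y - half, 0):y + half, max(x - half, 0):x + half]
        lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        feats.append(hist)
    return np.concatenate(feats)

def geometry_features(points):
    """Pairwise distances between fiducial points, scale-normalized
    so that face size variations cancel out."""
    d = pdist(points)
    return d / (d.max() + 1e-12)

def describe(gray, points):
    return np.concatenate([texture_features(gray, points),
                           geometry_features(points)])

# training would pair descriptors with emotion labels, e.g.:
# clf = SVC(kernel="rbf").fit(np.stack(descriptors), labels)
```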
Abstract:
The proliferation of the web presents an unsolved problem: automatically analyzing billions of pages of natural language. We introduce a scalable algorithm that clusters hundreds of millions of web pages into hundreds of thousands of clusters. It does this on a single mid-range machine using efficient algorithms and compressed document representations. It is applied to two web-scale crawls covering tens of terabytes: ClueWeb09 and ClueWeb12 contain 500 and 733 million web pages, respectively, and were clustered into 500,000 to 700,000 clusters. To the best of our knowledge, such fine-grained clustering has not been demonstrated previously. Previous approaches clustered a sample, which limits the maximum number of discoverable clusters. The proposed EM-tree algorithm uses the entire collection in clustering and produces several orders of magnitude more clusters than existing algorithms. Fine-grained clustering is necessary for meaningful clustering in massive collections, where the number of distinct topics grows linearly with collection size. These fine-grained clusters show improved cluster quality when assessed with two novel evaluations using ad hoc search relevance judgments and spam classifications for external validation. These evaluations solve the problem of assessing cluster quality where categorical labeling is unavailable or infeasible.
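A toy sketch of a tree-of-means clusterer in the spirit of EM-tree: recursive k-means builds an m-way tree whose leaves are the fine-grained clusters. This is a simplification under stated assumptions; the real algorithm streams the entire collection and operates on compressed (bit-vector) document representations:

```python
# Hedged sketch: recursive m-way k-means; each leaf is one fine cluster.
import numpy as np

def kmeans(X, k, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):           # keep old center if cluster empties
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def tree_cluster(X, m=4, min_size=20, depth=0, max_depth=6):
    """Split X into m children recursively; small nodes become leaves."""
    if len(X) < m * min_size or depth >= max_depth:
        return [X]                            # a leaf = one fine-grained cluster
    labels = kmeans(X, m)
    leaves = []
    for j in range(m):
        leaves += tree_cluster(X[labels == j], m, min_size, depth + 1, max_depth)
    return leaves

leaves = tree_cluster(np.random.default_rng(1).normal(size=(2000, 8)))
print(len(leaves), "clusters")
```

The tree shape is what makes the cluster count scale: an m-way tree of depth d can expose on the order of m^d leaves while each split only ever runs a small-k k-means.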
Abstract:
Background: Cancer-related fatigue (CRF) is the most common and distressing symptom reported by breast cancer survivors. The primary aim of this study was to translate and, for the first time, psychometrically evaluate a Spanish version of the Piper Fatigue Scale-Revised (S-PFS-R). Methods: One hundred and eleven women with stage I-IIIA breast cancer who had completed their primary cancer therapy in the previous 6 months, with the exception of hormone therapy, completed the S-PFS-R, the Profile of Mood States (POMS) Fatigue (POMS-F) and Vigor (POMS-V) subscales, and bilateral force handgrip testing. Data analysis included test-retest reliability, construct validity, criterion-related validity, and exploratory factor analyses. Results: Test-retest reliability was satisfactory (r > 0.86), and all subscales showed moderate to high construct validity estimates (corrected item-subscale correlations, Pearson r ≥ 0.65). The exploratory factor analysis revealed four dimensions explaining 75.5% of the common variance. The S-PFS-R total score correlated positively with the POMS-F subscale (r = 0.50-0.78) and negatively with the POMS-V subscale (r = -0.13 to -0.44), confirming criterion-related validity. Negative correlations among force handgrip testing, subscales, and total scores were weak (r = -0.26 to -0.29). Conclusions: The Spanish version of the PFS-R shows satisfactory psychometric properties in a sample of breast cancer survivors. This is the first study to translate the PFS-R into Spanish, and further testing is warranted.
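For readers unfamiliar with these checks, a minimal sketch of the test-retest and criterion-validity computations on invented data (the study's own data and exact procedures are not reproduced here):

```python
# Hedged sketch: Pearson correlations for test-retest reliability and
# criterion-related validity; all values are illustrative.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
t1 = rng.normal(5, 2, size=111)              # S-PFS-R total, first administration
t2 = t1 + rng.normal(0, 0.7, size=111)       # retest with measurement noise
poms_f = 0.6 * t1 + rng.normal(0, 1.5, 111)  # criterion measure (POMS-F)

r_retest, _ = pearsonr(t1, t2)       # compared against the r > 0.86 finding
r_criterion, _ = pearsonr(t1, poms_f)
print(f"test-retest r = {r_retest:.2f}, criterion r = {r_criterion:.2f}")
```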
Abstract:
Brain asymmetry has been a topic of interest for neuroscientists for many years. The advent of diffusion tensor imaging (DTI) allows researchers to extend the study of asymmetry to a microscopic scale by examining fiber integrity differences across hemispheres rather than macroscopic differences in shape or structure volumes. Even so, the power to detect these microarchitectural differences depends on the sample size and on how the brain images are registered. We fluidly registered 4 Tesla DTI scans from 180 healthy adult twins (45 identical and 45 fraternal pairs) to a geometrically-centered population mean template. We computed voxelwise maps of significant asymmetries (left/right hemisphere differences) for common fiber anisotropy indices (FA, GA). Quantitative genetic models revealed that 47-62% of the variance in asymmetry was due to genetic differences in the population. We studied how these heritability estimates varied with the type of registration target (T1- or T2-weighted) and with sample size. All methods consistently found that genetic factors strongly determine the lateralization of fiber anisotropy, facilitating the quest for specific genes that might influence brain asymmetry and fiber integrity.
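For orientation, the simplest heritability estimate obtainable from twin data is Falconer's formula (the study itself fits full quantitative genetic models, typically ACE structural equation models, so this is a back-of-envelope approximation, not the paper's method):

$$ h^2 \approx 2\,(r_{\mathrm{MZ}} - r_{\mathrm{DZ}}), $$

where \(r_{\mathrm{MZ}}\) and \(r_{\mathrm{DZ}}\) are the trait correlations within identical and fraternal pairs. Because identical twins share all, and fraternal twins on average half, of their segregating genes, doubling the difference of the two correlations estimates the additive genetic share of the variance, the quantity reported above as 47-62%.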
Abstract:
With the advent of functional neuroimaging techniques, in particular functional magnetic resonance imaging (fMRI), we have gained greater insight into the neural correlates of visuospatial function. However, it may not always be easy to identify the cerebral regions most specifically associated with performance on a given task. One approach is to examine the quantitative relationships between regional activation and behavioral performance measures. In the present study, we investigated the functional neuroanatomy of two different visuospatial processing tasks, judgement of line orientation and mental rotation. Twenty-four normal participants were scanned with fMRI using blocked periodic designs for experimental task presentation. Accuracy and reaction time (RT) for each trial of both the activation and baseline conditions in each experiment were recorded. Both experiments activated dorsal and ventral visual cortical areas as well as dorsolateral prefrontal cortex. More regionally specific associations with task performance were identified by estimating the association between the (sinusoidal) power of the functional response and the mean RT in the activation condition; a permutation test based on spatial statistics was used for inference. There was a significant behavioral-physiological association in right ventral extrastriate cortex for the line orientation task and in bilateral (predominantly right) superior parietal lobule for the mental rotation task. Comparable associations were not found between the power of response and RT in the baseline conditions of the tasks. These data suggest that one region in a neurocognitive network may be most strongly associated with behavioral performance, and this region may be regarded as the computationally least efficient or rate-limiting node of the network.
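A minimal sketch of a permutation test for such a power-RT association, using a plain across-subject correlation in place of the paper's spatial statistics; all data are invented:

```python
# Hedged sketch: shuffle RTs to break the pairing with response power,
# building a null distribution for the observed correlation.
import numpy as np

rng = np.random.default_rng(0)
power = rng.normal(size=24)                 # fitted response power per subject
rt = 0.5 * power + rng.normal(size=24)      # mean RT in the activation condition

obs = np.corrcoef(power, rt)[0, 1]
null = np.array([np.corrcoef(power, rng.permutation(rt))[0, 1]
                 for _ in range(10000)])    # permutation null distribution
p = (np.abs(null) >= abs(obs)).mean()       # two-sided permutation p-value
print(f"r = {obs:.2f}, permutation p = {p:.4f}")
```

The appeal of permutation inference here is that it requires no parametric assumptions about the distribution of the association statistic, which is why it pairs naturally with nonstandard spatial statistics.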
Abstract:
In the mining optimisation literature, most researchers have focused on two strategic- and tactical-level open-pit mine optimisation problems, termed the ultimate pit limit (UPIT) problem and the constrained pit limit (CPIT) problem, respectively. However, many researchers note that the substantial numbers of variables and constraints in real-world instances (e.g., with 50,000 to 1,000,000 blocks) make the CPIT's mixed integer programming (MIP) model intractable. It is therefore a considerable challenge to solve large-scale CPIT instances without relying on an exact MIP optimiser or on complicated MIP relaxation/decomposition methods. To address this challenge, two new graph-based algorithms, based on network flow graphs and conjunctive graph theory, are developed by exploiting problem properties. The performance of the proposed algorithms is validated on the large-scale benchmark UPIT and CPIT instances from the 2013 MineLib datasets. Compared with the best known results from MineLib, the proposed algorithms outperform the other CPIT solution approaches in the literature. The proposed graph-based algorithms lead to a more competent mine scheduling optimisation expert system because a third-party MIP optimiser is no longer indispensable and random neighbourhood search is not necessary.
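For background, UPIT is the classical maximum-weight closure problem and is solvable exactly by a minimum s-t cut on a network flow graph (Picard's construction); the sketch below illustrates that textbook reduction on toy data, not the paper's own algorithms:

```python
# Hedged sketch: UPIT as max-weight closure via min cut. Source feeds
# profitable blocks, waste blocks feed the sink, precedence arcs are
# uncapacitated; blocks on the source side of the min cut form the pit.
import networkx as nx

values = {0: 10, 1: -3, 2: -4, 3: 6, 4: -2}   # block economic values (toy)
prec = [(0, 1), (0, 2), (3, 2), (3, 4)]       # mining i requires removing j

G = nx.DiGraph()
for b, v in values.items():
    if v > 0:
        G.add_edge("s", b, capacity=v)        # source -> profitable blocks
    elif v < 0:
        G.add_edge(b, "t", capacity=-v)       # waste blocks -> sink
for i, j in prec:
    G.add_edge(i, j)                          # no capacity attr = infinite

cut_value, (s_side, _) = nx.minimum_cut(G, "s", "t")
pit = sorted(b for b in s_side if b != "s")   # blocks in the optimal pit
profit = sum(v for v in values.values() if v > 0) - cut_value
print(pit, profit)                            # here: all five blocks, profit 7
```

This is why pure graph algorithms suffice for UPIT; CPIT adds period capacities that break the clean closure structure, which is what the paper's conjunctive-graph machinery addresses.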
Abstract:
In fisheries managed using individual transferable quotas (ITQs), it is generally assumed that quota markets are well-functioning, allowing quota to flow, on either a temporary or a permanent basis, to those able to make the best use of it. However, despite an increasing number of fisheries being managed under ITQs, empirical assessments of the quota markets that have actually evolved in these fisheries remain scarce. The Queensland Coral Reef Fin-Fish Fishery (CRFFF) on the Great Barrier Reef has been managed under a system of ITQs since 2004. Data on individual quota holdings and trades for the period 2004-2012 were used to assess the CRFFF quota market and its evolution through time. Network analysis was applied to assess market structure and the nature of lease-trading relationships. An assessment of market participants' ability to balance their quota accounts, i.e., gap analysis, provided insights into market functionality and how it may have changed over the period observed. Trends in ownership and trade were determined, and market participants were identified as belonging to one of seven generalized types. The emergence of groups such as investors and lease-dependent fishers is clear. In 2011-2012, 41% of coral trout quota was owned by participants who did not fish it, and 64% of total coral trout landings were made by fishers who owned only 10% of the quota. Quota brokers emerged whose influence on the market varied with the bioeconomic conditions of the fishery. Throughout the study period some quota remained inactive, implying potential market inefficiencies. Contribution to this inactivity appeared asymmetrical, with most of it residing in the hands of smaller quota holders. The importance of transaction costs in the operation of the quota market, and the inequalities that may result, are discussed in light of these findings.
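A minimal sketch of what the gap analysis could compute per participant, assuming simple holdings, lease, and landings records; the field names and figures are illustrative, not the CRFFF data:

```python
# Hedged sketch: a participant's quota "gap" is usable quota (owned plus
# net leases) minus landings; persistent positive gaps flag inactive quota.
import pandas as pd

df = pd.DataFrame({
    "participant": ["A", "B", "C"],
    "quota_owned_t": [50.0, 0.0, 120.0],    # tonnes of quota owned
    "leased_in_t":   [10.0, 40.0, 0.0],
    "leased_out_t":  [0.0, 0.0, 90.0],
    "landings_t":    [55.0, 38.0, 5.0],
})

df["usable_quota_t"] = df.quota_owned_t + df.leased_in_t - df.leased_out_t
df["gap_t"] = df.usable_quota_t - df.landings_t   # >0: unused, <0: overrun
df["inactive_share"] = (df.gap_t.clip(lower=0) / df.usable_quota_t).fillna(0)
print(df[["participant", "gap_t", "inactive_share"]])
```

Participant "B" here is the lease-dependent fisher type (landings backed almost entirely by leased-in quota), while "C" resembles an investor who owns quota but leases most of it out rather than fishing it.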
Abstract:
The richness of the iris texture and its variability across individuals make it a useful biometric trait for personal authentication. One of the key stages in classical iris recognition is the normalization process, where the annular iris region is mapped to a dimensionless pseudo-polar coordinate system. This process results in a rectangular structure that can be used to compensate for differences in scale and variations in pupil size. Most iris recognition methods in the literature adopt linear sampling in the radial and angular directions when performing iris normalization. In this paper, a biomechanical model of the iris is used to define a novel nonlinear normalization scheme that improves iris recognition accuracy under different degrees of pupil dilation. The proposed biomechanical model is used to predict the radial displacement of any point in the iris at a given dilation level, and this information is incorporated in the normalization process. Experimental results on the WVU pupil light reflex database (WVU-PLR) indicate the efficacy of the proposed technique, especially when matching iris images with large differences in pupil size.
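A sketch of the normalization step being modified, i.e. a Daugman-style rubber-sheet unwrapping with a hook where a nonlinear radial mapping can be plugged in; the radial_map placeholder is hypothetical and merely marks where the biomechanical model's predicted displacements would enter:

```python
# Hedged sketch: unwrap the annular iris into a rectangle. The default
# identity radial_map reproduces conventional linear radial sampling;
# a nonlinear, dilation-dependent map would implement the paper's idea.
import numpy as np

def normalize_iris(img, pupil_c, r_pupil, r_iris, H=64, W=256, radial_map=None):
    """Map the annulus between r_pupil and r_iris to an H x W rectangle."""
    if radial_map is None:
        radial_map = lambda rho: rho                 # linear (classical) scheme
    thetas = np.linspace(0, 2 * np.pi, W, endpoint=False)
    rhos = radial_map(np.linspace(0, 1, H))          # normalized radii in [0, 1]
    out = np.zeros((H, W), dtype=img.dtype)
    cx, cy = pupil_c
    for i, rho in enumerate(rhos):
        r = r_pupil + rho * (r_iris - r_pupil)       # radius between boundaries
        xs = (cx + r * np.cos(thetas)).astype(int).clip(0, img.shape[1] - 1)
        ys = (cy + r * np.sin(thetas)).astype(int).clip(0, img.shape[0] - 1)
        out[i] = img[ys, xs]                         # nearest-neighbour sampling
    return out
```

Under pupil dilation the iris tissue does not compress uniformly, so resampling along a model-predicted nonlinear radial_map, rather than linearly, is what lets the normalized texture stay aligned across dilation levels.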
Abstract:
Background: Tuberculosis still remains one of the largest killer infectious diseases, warranting the identification of newer targets and drugs. Identification and validation of appropriate targets for designing drugs are critical steps in drug discovery, and are at present major bottlenecks. A majority of drugs in current clinical use for many diseases have been designed without knowledge of the targets, perhaps because standard methodologies to identify such targets in a high-throughput fashion do not really exist. With the different kinds of 'omics' data now available, computational approaches can be powerful means of obtaining shortlists of possible targets for further experimental validation. Results: We report a comprehensive in silico target identification pipeline, targetTB, for Mycobacterium tuberculosis. The pipeline incorporates a network analysis of the protein-protein interactome, a flux balance analysis of the reactome, experimentally derived phenotype essentiality data, sequence analyses and a structural assessment of targetability, using novel algorithms recently developed by us. Using flux balance analysis and network analysis, proteins critical for the survival of M. tuberculosis are first identified, followed by comparative genomics with the host, finally incorporating a novel structural analysis of the binding sites to assess the feasibility of a protein as a target. Further analyses include correlation with expression data and non-similarity to gut flora proteins as well as 'anti-targets' in the host, leading to the identification of 451 high-confidence targets. Through phylogenetic profiling against 228 pathogen genomes, shortlisted targets have been further explored to identify broad-spectrum antibiotic targets, while also identifying those specific to tuberculosis. Targets that address mycobacterial persistence and drug resistance mechanisms are also analysed. Conclusion: The pipeline developed provides a rational schema for drug target identification that is likely to have a high success rate, which is expected to save enormous amounts of money, resources and time in the drug discovery process. A thorough comparison with previously suggested targets in the literature demonstrates the usefulness of the integrated approach used in our study, highlighting the importance of systems-level analyses in particular. The method has the potential to be used as a general strategy for target identification and validation and hence to significantly impact most drug discovery programmes.
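Structurally, such a pipeline is a sequence of successive filters; the schematic below reduces it to set operations on protein identifiers. The set names and toy contents are illustrative placeholders, not the paper's data or algorithms:

```python
# Hedged sketch: targetTB-style shortlisting as successive set filters.
def shortlist_targets(proteome, essential, non_host_homologs,
                      druggable, expressed, gut_flora_similar,
                      anti_target_similar):
    candidates = proteome & essential        # flux-balance/network/phenotype
    candidates &= non_host_homologs          # comparative genomics vs host
    candidates &= druggable                  # structural binding-site check
    candidates &= expressed                  # supported by expression data
    candidates -= gut_flora_similar          # avoid harming gut flora
    candidates -= anti_target_similar        # avoid host 'anti-targets'
    return candidates

hits = shortlist_targets(
    proteome={"p1", "p2", "p3", "p4"},
    essential={"p1", "p2", "p4"},
    non_host_homologs={"p1", "p4"},
    druggable={"p1", "p4"},
    expressed={"p1", "p4"},
    gut_flora_similar={"p4"},
    anti_target_similar=set(),
)
print(hits)  # {'p1'}
```

Ordering the filters from cheap (sequence comparison) to expensive (structural assessment) keeps the costly steps running only on the survivors, which is the practical point of a pipeline design.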
Abstract:
Fluid bed granulation is a key pharmaceutical process which improves many of the powder properties required for tablet compression. The fluid bed granulation process comprises dry mixing, wetting and drying phases. Granules of high quality can be obtained by understanding and controlling the critical process parameters through timely measurements. Process analytical technologies (PAT) include physical process measurements and particle size data from a fluid bed granulator, analysed in an integrated manner. Recent regulatory guidelines strongly encourage the pharmaceutical industry to apply scientific and risk management approaches to the development of a product and its manufacturing process. The aim of this study was to utilise PAT tools to increase process understanding of fluid bed granulation and drying.

Inlet air humidity levels and granulation liquid feed affect powder moisture during fluid bed granulation, and moisture influences many process, granule and tablet qualities. The approach in this thesis was to identify sources of variation that are mainly related to moisture, to determine correlations and relationships, and to apply the PAT and design space concepts to fluid bed granulation and drying. Monitoring the material behaviour in a fluidised bed has traditionally relied on the observational ability and experience of an operator; good criteria for characterising material behaviour during the spraying and drying phases have been lacking, even though the entire performance of the process and the end-product quality depend on it.

The granules were produced in an instrumented bench-scale Glatt WSG5 fluid bed granulator. The effects of inlet air humidity and granulation liquid feed on the temperature measurements at different locations of the fluid bed granulator system were determined. This revealed dynamic changes in the measurements and enabled the most suitable sites for process control to be identified. The moisture originating from the granulation liquid and the inlet air affected the temperature of the mass and the pressure difference over the granules. Moreover, the effects of inlet air humidity and granulation liquid feed rate on granule size were evaluated, and compensatory techniques were used to optimise particle size. Various end-point indication techniques for drying were compared. The ∆T method, which is based on thermodynamic principles, eliminated the effects of humidity variations and gave the most precise estimate of the drying end-point; a sketch of this kind of calculation follows below. The influence of fluidisation behaviour on drying end-point detection was also determined: the feasibility of the ∆T method, and thus the similarity of end-point moisture contents, was found to depend on the variation in fluidisation between manufacturing batches.

A novel parameter describing the behaviour of material in a fluid bed was developed, calculated from the process air flow rate and the turbine fan speed, and compared with the fluidisation behaviour and the particle size results. Design space process trajectories for smooth fluidisation, based on the fluidisation parameters, were determined; with this design space it is possible to avoid both excessive and improper fluidisation as well as bed collapse. Furthermore, various process phenomena and failure modes were observed with the in-line particle size analyser: both rapid increases and decreases in granule size could be monitored in a timely manner. The fluidisation parameter and the pressure difference over the filters were also found to reflect particle size once the granules had formed. The various physical parameters evaluated in this thesis give valuable information on fluid bed process performance and increase process understanding.
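As referenced above, a hedged sketch of a ∆T-style end-point check: during drying, evaporative cooling keeps the bed cooler than the inlet air, so the difference shrinks as drying completes. The threshold and window values are assumptions for illustration, not values from the thesis:

```python
# Hedged sketch: detect the drying end-point as the first time
# ∆T = T_inlet - T_bed stays small for several consecutive samples.
import numpy as np

def drying_endpoint(t_inlet, t_bed, threshold=2.0, window=5):
    """Return the first index where ∆T stays below `threshold` (deg C)
    for `window` consecutive samples, else None."""
    dT = np.asarray(t_inlet) - np.asarray(t_bed)
    below = dT < threshold
    for i in range(len(below) - window + 1):
        if below[i:i + window].all():
            return i
    return None

# toy profiles: bed temperature climbs toward the inlet temperature
time = np.arange(60)
t_inlet = np.full_like(time, 60.0, dtype=float)
t_bed = 60.0 - 25.0 * np.exp(-time / 15.0)
print(drying_endpoint(t_inlet, t_bed))  # sample index of the end-point
```

Because both temperatures shift together with inlet humidity, their difference is less sensitive to humidity variations than either temperature alone, which is consistent with the robustness attributed to the ∆T method above.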