866 results for global industry classification standard


Relevance:

30.00%

Publisher:

Abstract:

Fusion ARTMAP is a self-organizing neural network architecture for multi-channel, or multi-sensor, data fusion. Single-channel Fusion ARTMAP is functionally equivalent to Fuzzy ART during unsupervised learning and to Fuzzy ARTMAP during supervised learning. The network has a symmetric organization such that each channel can be dynamically configured to serve as either a data input or a teaching input to the system. An ART module forms a compressed recognition code within each channel. These codes, in turn, become inputs to a single ART system that organizes the global recognition code. When a predictive error occurs, a process called parallel match tracking simultaneously raises vigilances in multiple ART modules until reset is triggered in one of them. Parallel match tracking thereby resets only that portion of the recognition code with the poorest match, or minimum predictive confidence. This internally controlled selective reset process is a type of credit assignment that creates a parsimoniously connected learned network. Fusion ARTMAP's multi-channel coding is illustrated by simulations of the Quadruped Mammal database.
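The reset mechanism described here rests on the fuzzy ART vigilance test. The following sketch (a minimal, hypothetical Python illustration, not the Fusion ARTMAP reference code) shows the match criterion for a single channel and how raising that channel's vigilance past the current match value forces a reset, which is the per-channel event that parallel match tracking induces across modules until one of them resets.

```python
import numpy as np

def match_value(input_vec, weight_vec):
    """Fuzzy ART match function: |I ^ w| / |I|, where ^ is the
    component-wise minimum (fuzzy AND) and |.| is the L1 norm."""
    return np.minimum(input_vec, weight_vec).sum() / input_vec.sum()

def resonates(input_vec, weight_vec, rho):
    """A stored category passes the vigilance test only if its match
    with the input meets or exceeds the vigilance parameter rho."""
    return match_value(input_vec, weight_vec) >= rho

# Toy single-channel example (hypothetical complement-coded input and weight).
I = np.array([0.9, 0.1, 0.8, 0.2])
w = np.array([0.7, 0.1, 0.6, 0.2])

print(match_value(I, w))          # 0.8 for these vectors
for rho in (0.5, 0.8, 0.95):
    print(rho, resonates(I, w, rho))
# Raising rho above the current match value (0.8 here) makes the test fail
# and triggers reset in this channel; parallel match tracking raises the
# vigilances of several channels at once until one such reset occurs.
```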

Relevance:

30.00%

Publisher:

Abstract:

Fusion ARTMAP is a self-organizing neural network architecture for multi-channel, or multi-sensor, data fusion. Fusion ARTMAP generalizes the fuzzy ARTMAP architecture in order to adaptively classify multi-channel data. The network has a symmetric organization such that each channel can be dynamically configured to serve as either a data input or a teaching input to the system. An ART module forms a compressed recognition code within each channel. These codes, in turn, become inputs to a single ART system that organizes the global recognition code. When a predictive error occurs, a process called parallel match tracking simultaneously raises vigilances in multiple ART modules until reset is triggered in one of them. Parallel match tracking thereby resets only that portion of the recognition code with the poorest match, or minimum predictive confidence. This internally controlled selective reset process is a type of credit assignment that creates a parsimoniously connected learned network.

Relevance:

30.00%

Publisher:

Abstract:

With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods for optimising either area or timing, while for power optimisation one often employs heuristics that are specific to a particular design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how can we build a design flow which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into this flow. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which then allows optimisation algorithms to be applied. In particular, we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or the delay of the implementation. Finally, the third question which this thesis attempts to answer is: is there a systematic approach for multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is to obtain multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under the zero-delay and a non-zero-delay model. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or the longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and the Uniform Cost Search algorithm. Simulated Annealing (SMA) is a probabilistic metaheuristic for the global optimisation problem of locating a good approximation to the global optimum of a given function in a large search space. We used SMA to decide probabilistically between moving from one optimised solution to another, such that the dynamic power is optimised under given delay constraints and the delay is optimised under given power constraints. A good approximation to the globally optimal solution under the energy constraint is obtained. Uniform Cost Search (UCS) is a search algorithm used for traversing or searching a weighted tree or graph. We have used the Uniform Cost Search algorithm to search within the AIG network for a specific AIG node order for applying the reordering rules. After applying the reordering rules, the AIG network is mapped to an AIG netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. Reductions of 23% in power and 15% in delay are achieved with minimal overhead, compared with the best known ABC results. Our approach is also implemented on a number of processors with combinational and sequential components, and significant savings are achieved.
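To make the move-acceptance rule concrete, here is a generic simulated-annealing sketch for delay-constrained power minimisation (Python, with hypothetical callback names; it is not the thesis's ABC/AIG implementation, which operates on AND-Inverter Graph reordering rules):

```python
import math
import random

def simulated_annealing(initial_state, neighbour, power_of, delay_of,
                        delay_limit, t_start=1.0, t_end=1e-3, alpha=0.95,
                        iters_per_temp=100):
    """Minimise dynamic power subject to a longest-path delay constraint.

    `neighbour`, `power_of` and `delay_of` are user-supplied callbacks
    (hypothetical names); in the thesis they would correspond to AIG
    reordering moves and switching-activity / arrival-time estimators.
    """
    state = initial_state
    best = state
    t = t_start
    while t > t_end:
        for _ in range(iters_per_temp):
            cand = neighbour(state)
            if delay_of(cand) > delay_limit:
                continue  # hard delay constraint: reject this move outright
            dp = power_of(cand) - power_of(state)
            # Always accept improvements; accept worsenings with
            # probability exp(-dp / t), which shrinks as t cools.
            if dp <= 0 or random.random() < math.exp(-dp / t):
                state = cand
                if power_of(state) < power_of(best):
                    best = state
        t *= alpha  # geometric cooling schedule
    return best
```

The symmetric power-constrained delay optimisation simply exchanges the roles of the power and delay callbacks in the cost and the constraint.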

Relevance:

30.00%

Publisher:

Abstract:

Fungal spoilage is the most common type of microbial spoilage in food, leading to significant economic and health problems throughout the world. Fermentation by lactic acid bacteria (LAB) is one of the oldest and most economical methods of producing and preserving food. Thus, LAB can be seen as an interesting tool in the development of novel bio-preservatives for the food industry. The overall objective of this study was to demonstrate that LAB can be used as a natural way to improve the shelf-life and safety of a wide range of food products. In the first part of the thesis, 116 LAB isolates were screened for their antifungal activity against four Aspergillus and Penicillium spp. commonly found in food. Approximately 83% of them showed antifungal activity, but only 1% showed broad-range antifungal activity against all tested fungi. The second approach was to apply antifungal LAB strains in the production of food products with extended shelf-life. The L. reuteri R29 strain was identified as having strong antifungal activity in vitro, as well as in sourdough bread, against Aspergillus niger, Fusarium culmorum and Penicillium expansum. The ability of the strain to produce bread of good quality was also determined using standard baking tests. Another strain, L. amylovorus DSM19280, was also identified as having strong antifungal activity in vitro and in vivo. The strain was used as an adjunct culture in a Cheddar cheese model system and demonstrated inhibition of P. expansum. Significantly, its presence had no detectable negative impact on cheese quality, as determined by analysis of moisture, salt, pH, and primary and secondary proteolysis. L. brevis PS1, a further strain identified during the screening as strongly antifungal, showed activity in vitro against common Fusarium spp. and was used in the production of a novel functional wort-based alcohol-free beverage. Challenge tests performed with F. culmorum confirmed the effectiveness of the antifungal strain in vivo. The shelf-life of the beverage was extended significantly when compared with a non-inoculated wort sample. A range of antifungal compounds were identified for the four LAB strains, namely L. reuteri ee1p, L. reuteri R29, L. brevis PS1 and L. amylovorus DSM20531. The identification of the compounds was based on liquid chromatography interfaced with a mass spectrometer and a PDA detector.

Relevance:

30.00%

Publisher:

Abstract:

Gliomagenesis is driven by a complex network of genetic alterations, and while the glioma genome has been a focus of investigation for many years, critical gaps in our knowledge of this disease remain. The identification of novel molecular biomarkers remains a focus of the greater cancer community as a method to improve the consistency and accuracy of pathological diagnosis. In addition, novel molecular biomarkers are drastically needed for the identification of targets that may ultimately result in novel therapeutics aimed at improving glioma treatment. Through the identification of new biomarkers, laboratories will focus future studies on the molecular mechanisms that underlie glioma development. Here, we report a series of genomic analyses identifying novel molecular biomarkers in multiple histopathological subtypes of glioma and refine the classification of malignant gliomas. We have completed a large-scale analysis of the WHO grade II-III astrocytoma exome and report mutations in the chromatin modifier alpha thalassemia/mental retardation X-linked (ATRX), isocitrate dehydrogenase 1 and 2 (IDH1 and IDH2), and tumor protein 53 (TP53) as the most frequent genetic alterations in low-grade astrocytomas. Furthermore, by analyzing the status of recurrently mutated genes in 363 brain tumors, we establish that highly recurrent gene mutational signatures are an effective tool for stratifying homogeneous patient populations into distinct groups with varying outcomes, and are thereby capable of predicting prognosis. Next, we establish mutations in the promoter of telomerase reverse transcriptase (TERT) as a frequent genetic event in gliomas and in tissues with low rates of self-renewal. We identify TERT promoter mutations as the most frequent genetic alteration in primary glioblastoma. Additionally, we show that TERT promoter mutations in combination with IDH1 and IDH2 mutations are able to delineate distinct clinical tumor cohorts and are capable of predicting median overall survival more effectively than standard histopathological diagnosis alone. Taken together, these data advance our understanding of the genetic alterations that underlie the transformation of glial cells into neoplasms, and we provide novel genetic biomarkers and multi-gene mutational signatures that can be used to refine the classification of malignant gliomas and provide opportunities for improved diagnosis.

Relevance:

30.00%

Publisher:

Abstract:

Advances in technology, communication, and transportation over the past thirty years have led to tighter linkages and enhanced collaboration across traditional borders between nations, institutions, and cultures. This thesis uses the furniture industry as a lens to examine the impacts of globalization on individual countries and companies as they interact on an international scale. Using global value chain analysis and international trade data, I break down the furniture production process and explore how countries have specialized in particular stages of production to differentiate themselves from competitors and maximize the benefits of global involvement. Through interviews with company representatives and evaluation of branding strategies such as advertisements, webpages, and partnerships, I investigate across four country cases how furniture companies construct strong brands in an effort to stand out as unique to consumers with access to products made around the globe. Branding often serves to highlight distinctiveness and associate companies with national identities, thus revealing that in today’s globalized and interconnected society, local differences and diversity are more significant than ever.

Relevance:

30.00%

Publisher:

Abstract:

A key challenge in promoting decent work worldwide is how to improve the position of both firms and workers in value chains and global production networks driven by lead firms. This article develops a framework for analysing the linkages between the economic upgrading of firms and the social upgrading of workers. Drawing on studies which indicate that firm upgrading does not necessarily lead to improvements for workers, with a particular focus on the Moroccan garment industry, it outlines different trajectories and scenarios to provide a better understanding of the relationship between economic and social upgrading.

Relevance:

30.00%

Publisher:

Abstract:

The rise of private food standards has brought forth an ongoing debate about whether they work as a barrier for smallholders and hinder poverty reduction in developing countries. This paper uses a global value chain approach to explain the relationship between value chain structure and agrifood safety and quality standards and to discuss the challenges and possibilities this entails for the upgrading of smallholders. It maps four potential value chain scenarios depending on the degree of concentration in the markets for agrifood supply (farmers and manufacturers) and demand (supermarkets and other food retailers) and discusses the impact of lead firms and key intermediaries on smallholders in different chain situations. Each scenario is illustrated with case examples. Theoretical and policy issues are discussed, along with proposals for future research in terms of industry structure, private governance, and sustainable value chains.

Relevance:

30.00%

Publisher:

Abstract:

The various contributions to this book have documented how NAFTA-inspired firm strategies are changing the geography of apparel production in North America. The authors show in myriad ways how companies at different positions along the apparel commodity chain are responding to the new institutional and regulatory environment that NAFTA creates. By making it easier for U.S. companies to take advantage of Mexico as a nearby low-cost site for export-oriented apparel production, NAFTA is deepening the regional division of labor within North America, and this process has consequences for firms and workers in each of the signatory countries. In the introduction to this book we alluded to the obvious implications of shifting investment and trade patterns in the North American apparel industry for employment in the different countries. In this concluding chapter we focus on Mexico in the NAFTA era, specifically the extent to which Mexico's role in the North American economy facilitates or inhibits its economic development. We begin with a discussion of the contemporary debate about Mexico's development, which turns on the question of how to assess the implications of Mexico's rapid and profound process of economic reform. Second, we focus on the textile and apparel industries as sectors that have been significantly affected by changes in regulatory environments at both the global and regional levels. Third, we examine the evidence regarding Mexico's NAFTA-era export dynamism, and in particular we emphasize the importance of interfirm networks, both for making sense of Mexico's meteoric rise among apparel exporters and for evaluating the implications of this dynamism for development. Fourth, we turn to a consideration of the national political-economic environment that shapes developmental outcomes for all Mexicans. Although regional disparities within Mexico are profound, aspects of government policy, such as management of the national currency, and characteristics of the institutional environment, such as industrial relations, have nationwide effects, and critics of NAFTA charge that these factors are contributing to a process of economic and social polarization that is ever more evident (Morales 1999; Dussel Peters 2000). Finally, we suggest that the mixed consequences of Mexico's NAFTA-era growth can be taken as emblematic of the contradictions that the process of globalization poses for economic and social development. The anti-sweatshop campaign in North America is one example of transnational or cross-border movements that are emerging to address the negative consequences of this process. In bringing attention to the problem of sweatshop production in North America, activists are developing strategies that rely on a network logic that is not dissimilar to the approaches reflected in the various chapters of this book.

Relevance:

30.00%

Publisher:

Abstract:

Automatic taxonomic categorisation of 23 species of dinoflagellates was demonstrated using field-collected specimens. These dinoflagellates have been responsible for the majority of toxic and noxious phytoplankton blooms which have occurred in the coastal waters of the European Union in recent years and have a severe impact on the aquaculture industry. The performance of human 'expert' ecologists/taxonomists in identifying these species was compared to that achieved by 2 artificial neural network classifiers (multilayer perceptron and radial basis function networks) and 2 other statistical techniques, k-Nearest Neighbour and Quadratic Discriminant Analysis. The neural network classifiers outperformed the classical statistical techniques. Over extended trials, the human experts averaged 85% while the radial basis function network achieved a best performance of 83%, the multilayer perceptron 66%, k-Nearest Neighbour 60%, and the Quadratic Discriminant Analysis 56%.
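For readers who want to reproduce this kind of comparison on their own data, the sketch below (Python/scikit-learn on synthetic data; purely illustrative, not the original study's feature set or network configurations) cross-validates a multilayer perceptron, k-Nearest Neighbour and Quadratic Discriminant Analysis; a radial basis function network is omitted because scikit-learn has no direct equivalent.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a multi-class taxonomic dataset (hypothetical).
X, y = make_classification(n_samples=600, n_features=20, n_informative=12,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

classifiers = {
    "MLP": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(50,),
                                       max_iter=2000, random_state=0)),
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "QDA": QuadraticDiscriminantAnalysis(),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```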

Relevance:

30.00%

Publisher:

Abstract:

The absorption spectra of phytoplankton in the visible domain hold implicit information on the phytoplankton community structure. Here we use this information to retrieve quantitative information on phytoplankton size structure by developing a novel method to compute the exponent of an assumed power-law for their particle-size spectrum. This quantity, in combination with total chlorophyll-a concentration, can be used to estimate the fractional concentration of chlorophyll in any arbitrarily-defined size class of phytoplankton. We further define and derive expressions for two distinct measures of cell size of mixed populations, namely, the average spherical diameter of a bio-optically equivalent homogeneous population of cells of equal size, and the average equivalent spherical diameter of a population of cells that follow a power-law particle-size distribution. The method relies on measurements of two quantities of a phytoplankton sample: the concentration of chlorophyll-a, which is an operational index of phytoplankton biomass, and the total absorption coefficient of phytoplankton in the red peak of the visible spectrum at 676 nm. A sensitivity analysis confirms that the relative errors in the estimates of the exponent of particle size spectra are reasonably low. The exponents of phytoplankton size spectra, estimated for a large set of in situ data from a variety of oceanic environments (~2400 samples), are within a reasonable range; and the estimated fractions of chlorophyll in pico-, nano- and micro-phytoplankton are generally consistent with those obtained by an independent, indirect method based on diagnostic pigments determined using high-performance liquid chromatography. The estimates of cell size for in situ samples dominated by different phytoplankton types (diatoms, prymnesiophytes, Prochlorococcus, other cyanobacteria and green algae) yield nominal sizes consistent with the taxonomic classification. To estimate the same quantities from satellite-derived ocean-colour data, we combine our method with algorithms for obtaining inherent optical properties from remote sensing. The spatial distribution of the size-spectrum exponent and the chlorophyll fractions of pico-, nano- and micro-phytoplankton estimated from satellite remote sensing are in agreement with the current understanding of the biogeography of phytoplankton functional types in the global oceans. This study contributes to our understanding of the distribution and time evolution of phytoplankton size structure in the global oceans.
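As background for the size-class fractions mentioned above, a power-law particle-size spectrum implies a closed-form fraction for any size class once an assumption is made about how chlorophyll scales with cell size. The formulation below is an illustrative sketch with symbols chosen here; it assumes chlorophyll proportional to cell volume and does not reproduce the paper's full bio-optical retrieval of the exponent from chlorophyll-a and the absorption coefficient at 676 nm.

```latex
% Assumed power-law particle-size spectrum with exponent \xi:
N(D)\,\mathrm{d}D \;=\; k\,D^{-\xi}\,\mathrm{d}D, \qquad D_{\min}\le D\le D_{\max}.
% If the chlorophyll in a cell scales with its volume (\propto D^{3}), the fraction
% of total chlorophyll held by the size class [D_{1}, D_{2}] is
F(D_{1},D_{2}) \;=\;
  \frac{\displaystyle\int_{D_{1}}^{D_{2}} D^{3}\,N(D)\,\mathrm{d}D}
       {\displaystyle\int_{D_{\min}}^{D_{\max}} D^{3}\,N(D)\,\mathrm{d}D}
  \;=\; \frac{D_{2}^{\,4-\xi}-D_{1}^{\,4-\xi}}
             {D_{\max}^{\,4-\xi}-D_{\min}^{\,4-\xi}}, \qquad \xi \neq 4.
```

The total chlorophyll-a concentration then sets the absolute amount in each class, which is how the retrieved exponent and chlorophyll-a together yield the pico-, nano- and micro-phytoplankton fractions.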

Relevance:

30.00%

Publisher:

Abstract:

Agglomerative cluster analyses encompass many techniques, which have been widely used in various fields of science. In biology, and specifically ecology, datasets are generally highly variable and may contain outliers, which increase the difficulty of identifying the number of clusters. Here we present a new criterion to determine statistically the optimal level of partition in a classification tree. The robustness of the criterion is tested against perturbed data (outliers) using an observation or variable with randomly generated values. The technique, called the Random Simulation Test (RST), is tested on (1) the well-known Iris dataset [Fisher, R.A., 1936. The use of multiple measurements in taxonomic problems. Ann. Eugenic. 7, 179–188], (2) simulated data with predetermined numbers of clusters following Milligan and Cooper [Milligan, G.W., Cooper, M.C., 1985. An examination of procedures for determining the number of clusters in a data set. Psychometrika 50, 159–179] and finally (3) is applied to real copepod community data previously analyzed in Beaugrand et al. [Beaugrand, G., Ibanez, F., Lindley, J.A., Reid, P.C., 2002. Diversity of calanoid copepods in the North Atlantic and adjacent seas: species associations and biogeography. Mar. Ecol. Prog. Ser. 232, 179–195]. The technique is compared to several standard techniques. RST performed generally better than existing algorithms on simulated data and proved to be especially efficient with highly variable datasets.
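For context on what "optimal level of partition in a classification tree" means in practice, the sketch below (generic scipy-based agglomerative clustering on the Iris data cited above; it does not implement the authors' Random Simulation Test) builds a dendrogram and cuts it at several candidate numbers of clusters, the choice a criterion such as RST would make statistically.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.datasets import load_iris

X = load_iris().data

# Agglomerative classification tree (Ward linkage on Euclidean distances).
Z = linkage(X, method="ward")

# Cut the tree at several candidate partition levels and report cluster sizes;
# a statistical criterion would then select one of these levels.
for k in (2, 3, 4, 5):
    labels = fcluster(Z, t=k, criterion="maxclust")
    sizes = np.bincount(labels)[1:]  # cluster labels start at 1
    print(f"k={k}: cluster sizes {sizes.tolist()}")
```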

Relevance:

30.00%

Publisher:

Abstract:

Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and a methodology to quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community. We discuss changes compared to previous estimates as well as consistency within and among components, alongside methodology and data limitations. CO2 emissions from fossil fuels and industry (EFF) are based on energy statistics and cement production data, while emissions from land-use change (ELUC), mainly deforestation, are based on combined evidence from land-cover-change data, fire activity associated with deforestation, and models. The global atmospheric CO2 concentration is measured directly and its rate of growth (GATM) is computed from the annual changes in concentration. The mean ocean CO2 sink (SOCEAN) is based on observations from the 1990s, while the annual anomalies and trends are estimated with ocean models. The variability in SOCEAN is evaluated with data products based on surveys of ocean CO2 measurements. The global residual terrestrial CO2 sink (SLAND) is estimated by the difference of the other terms of the global carbon budget and compared to results of independent dynamic global vegetation models forced by observed climate, CO2, and land-cover change (some including nitrogen–carbon interactions). We compare the mean land and ocean fluxes and their variability to estimates from three atmospheric inverse methods for three broad latitude bands. All uncertainties are reported as ±1σ, reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. For the last decade available (2005–2014), EFF was 9.0 ± 0.5 GtC yr⁻¹, ELUC was 0.9 ± 0.5 GtC yr⁻¹, GATM was 4.4 ± 0.1 GtC yr⁻¹, SOCEAN was 2.6 ± 0.5 GtC yr⁻¹, and SLAND was 3.0 ± 0.8 GtC yr⁻¹. For the year 2014 alone, EFF grew to 9.8 ± 0.5 GtC yr⁻¹, 0.6 % above 2013, continuing the growth trend in these emissions, albeit at a slower rate compared to the average growth of 2.2 % yr⁻¹ that took place during 2005–2014. Also, for 2014, ELUC was 1.1 ± 0.5 GtC yr⁻¹, GATM was 3.9 ± 0.2 GtC yr⁻¹, SOCEAN was 2.9 ± 0.5 GtC yr⁻¹, and SLAND was 4.1 ± 0.9 GtC yr⁻¹. GATM was lower in 2014 compared to the past decade (2005–2014), reflecting a larger SLAND for that year. The global atmospheric CO2 concentration reached 397.15 ± 0.10 ppm averaged over 2014. For 2015, preliminary data indicate that the growth in EFF will be near or slightly below zero, with a projection of −0.6 [range of −1.6 to +0.5] %, based on national emissions projections for China and the USA, and projections of gross domestic product corrected for recent changes in the carbon intensity of the global economy for the rest of the world. From this projection of EFF and assumed constant ELUC for 2015, cumulative emissions of CO2 will reach about 555 ± 55 GtC (2035 ± 205 GtCO2) for 1870–2015, about 75 % from EFF and 25 % from ELUC. This living data update documents changes in the methods and data sets used in this new carbon budget compared with previous publications of this data set (Le Quéré et al., 2015, 2014, 2013).
All observations presented here can be downloaded from the Carbon Dioxide Information Analysis Center (doi:10.3334/CDIAC/GCP_2015).
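Because SLAND is, by construction, the residual of the budget, the decade-mean values quoted above can be checked directly against the budget identity (the subscripted notation below corresponds to the EFF, ELUC, GATM, SOCEAN and SLAND of the abstract):

```latex
E_{\mathrm{FF}} + E_{\mathrm{LUC}} \;=\; G_{\mathrm{ATM}} + S_{\mathrm{OCEAN}} + S_{\mathrm{LAND}}
\quad\Longrightarrow\quad
S_{\mathrm{LAND}} \;=\; E_{\mathrm{FF}} + E_{\mathrm{LUC}} - G_{\mathrm{ATM}} - S_{\mathrm{OCEAN}}.
```

For 2005–2014 this gives 9.0 + 0.9 − 4.4 − 2.6 = 2.9 GtC yr⁻¹, consistent within the rounding of the reported one-decimal values with the quoted 3.0 ± 0.8 GtC yr⁻¹; for 2014 it gives 9.8 + 1.1 − 3.9 − 2.9 = 4.1 GtC yr⁻¹, matching the reported value exactly.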

Relevance:

30.00%

Publisher:

Abstract:

Item Response Theory (IRT) is a valuable methodology for analyzing the quality of the instruments used in the assessment of academic achievement. This article presents an implementation of this theory, in particular the Rasch model, to calibrate the items and the instrument used in the classification test for the Basic Mathematics subject at Universidad Jorge Tadeo Lozano. 509 response chains of students, obtained in the June 2011 administration, were analyzed together with a set of 45 items through eight case studies that show progressive steps of the calibration. Criteria for the validity of items and of the whole instrument were defined and used to select the groups of response chains and items that were finally used to determine the parameters, which then allowed the classification of the assessed students by the test.
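For reference, the dichotomous Rasch model underlying such a calibration gives the probability that student i answers item j correctly in terms of the student ability θ_i and the item difficulty b_j (standard formulation; the symbols are generic, not taken from the article):

```latex
P\!\left(X_{ij}=1 \mid \theta_i, b_j\right)
  \;=\; \frac{\exp\!\left(\theta_i - b_j\right)}{1+\exp\!\left(\theta_i - b_j\right)}.
```

Calibration estimates the item difficulties b_j (and the abilities θ_i) from the observed response chains; item- and person-fit criteria such as those defined in the article then determine which items and response chains enter the final estimation used to classify students.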