960 results for Module Maximum
Abstract:
Reconstructions of salinity are used to diagnose changes in the hydrological cycle and ocean circulation. A widely used method of determining past salinity uses oxygen isotope (δ18Ow) residuals after the extraction of the global ice volume and temperature components. This method relies on a constant relationship between δ18Ow and salinity through time. Here we use the isotope-enabled, fully coupled General Circulation Model (GCM) HadCM3 to test the application of spatially and temporally independent relationships in the reconstruction of past ocean salinity. Simulations of the Late Holocene (LH), Last Glacial Maximum (LGM), and Last Interglacial (LIG) climates are performed and benchmarked against existing compilations of stable oxygen isotopes in carbonates (δ18Oc), which primarily reflect δ18Ow and temperature. We find that HadCM3 produces an accurate representation of the surface ocean δ18Oc distribution for the LH and LGM. Our simulations show considerable variability in spatial and temporal δ18Ow-salinity relationships. Spatial gradients are generally shallower but within ∼50% of the actual simulated LH-to-LGM and LH-to-LIG temporal gradients, and temporal gradients calculated from multi-decadal variability are generally shallower than both spatial and actual simulated gradients. The largest sources of uncertainty in salinity reconstructions are changes in regional freshwater budgets, ocean circulation, and sea ice regimes, which can cause errors in salinity estimates exceeding 4 psu. Our results suggest that paleosalinity reconstructions in the South Atlantic, Indian, and tropical Pacific Oceans should be most robust, since these regions exhibit relatively constant δ18Ow-salinity relationships across spatial and temporal scales. The largest uncertainties will affect North Atlantic and high-latitude paleosalinity reconstructions.
Finally, the results show that it is difficult to generate reliable salinity estimates for regions of dynamic oceanography, such as the North Atlantic, without additional constraints.
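The residual method described above can be sketched in a few lines. This is a minimal illustration, not the study's implementation: the linearized paleotemperature equation is a Shackleton-type form, and the δ18Ow-salinity slope and intercept are illustrative placeholder values, since the whole point of the abstract is that this relationship is not actually constant.

```python
# Sketch of the residual-based paleosalinity method. The coefficients
# below (paleotemperature equation, slope, intercept) are illustrative
# assumptions, not values from the study.

def d18Ow_from_d18Oc(d18Oc, temp_c):
    """Invert a linearized carbonate paleotemperature equation,
    T ~ 16.9 - 4.38 * (d18Oc - d18Ow), to recover seawater d18Ow."""
    return d18Oc + (temp_c - 16.9) / 4.38

def salinity_from_d18Ow(d18Ow, ice_volume_offset, slope=0.5, intercept=-17.0):
    """Convert the local d18Ow residual (after removing the global
    ice-volume component) to salinity in psu, via an assumed constant
    linear d18Ow-salinity relationship."""
    residual = d18Ow - ice_volume_offset
    return (residual - intercept) / slope
```

The assumed-constant `slope` and `intercept` are exactly the quantities the simulations show to vary regionally, which is where the >4 psu errors come from.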
Abstract:
The Last Glacial Maximum (LGM) exhibits different large-scale atmospheric conditions compared to the present-day climate due to altered boundary conditions. The regional atmospheric circulation and associated precipitation patterns over Europe are characterized for the first time with a weather typing approach (circulation weather types, CWT) for LGM paleoclimate simulations. The CWT approach is applied to four representative regions across Europe. While the CWTs over Western Europe are predominantly westerly for both present-day and LGM conditions, considerable differences are identified elsewhere: Southern Europe experienced more frequent westerly and cyclonic CWTs under LGM conditions, while Central and Eastern Europe were predominantly affected by southerly and easterly flow patterns. Under LGM conditions, rainfall is enhanced over Western Europe but is reduced over most of Central and Eastern Europe. These differences are explained by changing CWT frequencies and evaporation patterns over the North Atlantic Ocean. The regional differences in the CWTs and precipitation patterns are linked to the North Atlantic storm track, which was stronger over Europe in all considered models during the LGM, explaining the overall increase of the cyclonic CWT. Enhanced evaporation over the North Atlantic leads to higher moisture availability over the ocean. Despite the overall cooling during the LGM, this explains the enhanced precipitation over southwestern Europe, particularly Iberia. This study links large-scale atmospheric dynamics to the regional circulation and associated precipitation patterns and provides an improved regional assessment of the European climate under LGM conditions.
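A circulation-weather-type classification of the Jenkinson-Collison kind reduces a pressure field to geostrophic flow components and a vorticity term at a grid point, then labels the day as directional or rotational. The sketch below is a simplified version (the standard scheme also has hybrid types and different thresholds); the inputs `w`, `s`, `z` are assumed to be precomputed westerly flow, southerly flow, and geostrophic vorticity.

```python
import math

def classify_cwt(w, s, z):
    """Simplified Jenkinson-Collison-style CWT classification.
    w: geostrophic westerly flow, s: southerly flow, z: vorticity.
    Returns "C"/"A" when rotation dominates, else one of eight
    directional types. Thresholds are simplified; hybrid types omitted."""
    f = math.hypot(w, s)                      # total flow strength
    if abs(z) > 2 * f:                        # rotation dominates flow
        return "C" if z > 0 else "A"          # cyclonic / anticyclonic
    # Direction the wind is coming FROM, in degrees (270 = westerly)
    d = (math.degrees(math.atan2(w, s)) + 180) % 360
    dirs = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    return dirs[int((d + 22.5) // 45) % 8]
```

Counting such labels over all days of a simulation gives the CWT frequency changes (e.g. more cyclonic days over Southern Europe at the LGM) that the abstract discusses.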
Abstract:
Recently, the deterministic tourist walk has emerged as a novel approach for texture analysis. This method employs a traveler visiting image pixels using a deterministic walk rule. The resulting trajectories provide clues about pixel interaction in the image that can be used for image classification and identification tasks. This paper proposes a new walk rule for the tourist based on the contrast direction of a neighborhood. The results yielded by this approach are comparable to those from traditional texture analysis methods in the classification of a set of Brodatz textures and their rotated versions, thus confirming the potential of the method as a feasible texture analysis methodology. (C) 2010 Elsevier B.V. All rights reserved.
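The basic deterministic tourist walk can be sketched as follows. This is the classic similarity-based rule with a memory window, not the paper's contrast-direction rule, and the attractor-detection step used to build texture signatures is omitted.

```python
import numpy as np

def tourist_walk(img, start, mu=2, max_steps=100):
    """Deterministic tourist walk on a grayscale image: from each pixel,
    move to the 8-neighbor with the most similar intensity, never
    revisiting any of the last `mu` visited pixels. Returns the
    trajectory as a list of (row, col) positions."""
    h, w = img.shape
    path = [start]
    for _ in range(max_steps):
        y, x = path[-1]
        recent = set(path[-mu:]) if mu else set()
        best, best_d = None, None
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy, dx) == (0, 0) or not (0 <= ny < h and 0 <= nx < w):
                    continue
                if (ny, nx) in recent:        # forbidden by the memory
                    continue
                d = abs(int(img[ny, nx]) - int(img[y, x]))
                if best is None or d < best_d:
                    best, best_d = (ny, nx), d
        if best is None:
            break
        path.append(best)
    return path
```

Statistics of many such trajectories (transient lengths, attractor periods), started from every pixel, form the texture descriptor; the proposed contrast-direction rule would replace the neighbor-selection criterion above.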
Abstract:
A bipartite graph G = (V, W, E) is convex if there exists an ordering of the vertices of W such that, for each v ∈ V, the neighbors of v are consecutive in W. We describe both a sequential and a BSP/CGM algorithm to find a maximum independent set in a convex bipartite graph. The sequential algorithm improves over the running time of the previously known algorithm, and the BSP/CGM algorithm is a parallel version of the sequential one. The complexity of the algorithms does not depend on |W|.
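For intuition, the size of a maximum independent set in a convex bipartite graph can be obtained from a maximum matching via Kőnig's theorem (α = |V| + |W| − ν), and matchings in convex bipartite graphs admit Glover's classic greedy algorithm. The sketch below takes that route; it is not the improved algorithm of the paper, and it only returns the set's size.

```python
import heapq

def max_independent_set_size(intervals, m):
    """Size of a maximum independent set in a convex bipartite graph.
    Each v in V is given as an interval (l, r): v is adjacent to
    w = l, ..., r in the ordered side W = {0, ..., m-1}.
    Glover's greedy maximum matching + Konig's theorem."""
    by_left = sorted(intervals)                   # intervals by left end
    heap, i, matching = [], 0, 0
    for w in range(m):
        while i < len(by_left) and by_left[i][0] <= w:
            heapq.heappush(heap, by_left[i][1])   # candidates by right end
            i += 1
        while heap and heap[0] < w:               # interval already expired
            heapq.heappop(heap)
        if heap:                                  # match w to the tightest v
            heapq.heappop(heap)
            matching += 1
    return len(intervals) + m - matching
```

The greedy rule (always match the candidate whose interval ends soonest) is what convexity buys: it makes a single left-to-right sweep over W optimal.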
Abstract:
This paper develops a bias correction scheme for a multivariate heteroskedastic errors-in-variables model. The applicability of this model is justified in areas such as astrophysics, epidemiology and analytical chemistry, where the variables are subject to measurement errors and the variances vary with the observations. We conduct Monte Carlo simulations to investigate the performance of the corrected estimators. The numerical results show that the bias correction scheme yields nearly unbiased estimates. We also give an application to a real data set.
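The need for such corrections is easy to demonstrate by Monte Carlo. The sketch below shows the textbook homoskedastic case (attenuation of a regression slope under measurement error, undone by the reliability ratio); it is a generic illustration of errors-in-variables bias, not the paper's heteroskedastic correction scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_and_corrected_slope(n=500, beta=2.0, sigma_u=0.5, reps=200):
    """Monte Carlo illustration: regressing y on an error-contaminated
    covariate attenuates the slope toward zero; dividing by the
    reliability ratio lambda = var(x) / (var(x) + var(u)) undoes it."""
    naive, corrected = [], []
    for _ in range(reps):
        x = rng.normal(0, 1, n)                  # true covariate
        w = x + rng.normal(0, sigma_u, n)        # observed with error
        y = beta * x + rng.normal(0, 0.3, n)
        b = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
        lam = 1.0 / (1.0 + sigma_u**2)           # reliability ratio
        naive.append(b)
        corrected.append(b / lam)
    return np.mean(naive), np.mean(corrected)
```

With `sigma_u = 0.5` the naive slope converges to about 2/1.25 = 1.6 rather than the true 2.0, while the corrected estimator is nearly unbiased, mirroring the paper's finding for its (more general) corrected estimators.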
Abstract:
In this paper, we define and study a special type of trisections in a module category, namely the compact trisections which characterize quasi-directed components. We apply this notion to the study of laura algebras and we use it to define a class of algebras with predictable Auslander-Reiten components.
Abstract:
We give a general matrix formula for computing the second-order skewness of maximum likelihood estimators. The formula was first presented in a tensorial version by Bowman and Shenton (1998). Our matrix formulation has numerical advantages, since it requires only simple operations on matrices and vectors. We apply the second-order skewness formula to a normal model with a generalized parametrization and to an ARMA model. (c) 2010 Elsevier B.V. All rights reserved.
Abstract:
We analyse the finite-sample behaviour of two second-order bias-corrected alternatives to the maximum-likelihood estimator of the parameters in a multivariate normal regression model with general parametrization proposed by Patriota and Lemonte [A. G. Patriota and A. J. Lemonte, Bias correction in a multivariate normal regression model with general parameterization, Stat. Prob. Lett. 79 (2009), pp. 1655-1662]. The two finite-sample corrections we consider are the conventional second-order bias-corrected estimator and the bootstrap bias correction. We present numerical results comparing the performance of these estimators. Our results reveal that the analytical bias correction outperforms the numerical bias corrections obtained from bootstrapping schemes.
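The bootstrap bias correction being compared has a standard generic form: estimate the bias as the mean of the estimator over resamples minus the original estimate, then subtract it. The sketch below applies it to the maximum-likelihood variance estimator (biased low by the factor (n−1)/n); this is an illustration of the scheme, not the paper's multivariate setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_bias_correct(sample, estimator, n_boot=2000):
    """Generic bootstrap bias correction:
    bias_hat = mean(theta_star) - theta_hat, so the corrected estimate
    is theta_hat - bias_hat = 2 * theta_hat - mean(theta_star)."""
    theta_hat = estimator(sample)
    boot = [estimator(rng.choice(sample, size=len(sample), replace=True))
            for _ in range(n_boot)]
    return 2 * theta_hat - np.mean(boot)

sample = rng.normal(0, 1, 30)
mle_var = lambda x: np.var(x)        # divides by n: biased low
```

Here `bootstrap_bias_correct(sample, mle_var)` pushes the variance estimate upward toward the unbiased value; the paper's point is that when an analytical second-order bias expression is available, it beats this resampling route.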
Abstract:
The purpose of this work is to develop a cost-effective semistationary CPC concentrator for a string PV module. A novel method using an annual irradiation distribution diagram projected onto a north-south vertical plane is developed. This method makes it easy to determine the optimum acceptance angle of the concentrator and the required number of annual tilts. Concentration ratios of 2-5x are investigated, with corresponding acceptance angles between 5 and 15°. The concentrator should be tilted 2-6 times per year. Experiments have been performed on a string module of 10 cells connected in series and equipped with a compound parabolic concentrator with C = 3.3x. Measurements show that the output increases by a factor of 2-2.5 for the concentrator module compared to a reference module without a concentrator. If very cheap aluminium reflectors are used, the cost of the PV module can be decreased by nearly a factor of two.
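The trade-off between concentration ratio and acceptance angle is bounded by the thermodynamic limit for a 2D (trough) CPC, C_ideal = 1/sin(θa). The sketch below only computes this ideal bound; practical, truncated designs like the 3.3x trough above stay below it, which is consistent with the 2-5x range studied.

```python
import math

def ideal_cpc_concentration(acceptance_half_angle_deg):
    """Ideal 2D CPC concentration limit: C = 1 / sin(theta_a).
    Truncated practical troughs achieve less than this bound."""
    return 1.0 / math.sin(math.radians(acceptance_half_angle_deg))
```

For example, a 15° acceptance half-angle bounds a full CPC at roughly 3.9x; widening acceptance (to allow fewer annual tilts) lowers the achievable concentration, which is exactly the design trade-off the irradiation-diagram method optimizes.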
Abstract:
Modular product architectures have generated numerous benefits for companies in terms of cost, lead-time and quality. The defined interfaces and the modules’ properties decrease the effort to develop new product variants, and provide an opportunity to perform parallel tasks in design, manufacturing and assembly. The background of this thesis is that companies perform verifications (tests, inspections and controls) of products late, when most of the parts have been assembled. This extends the lead-time to delivery and ruins benefits from a modular product architecture, specifically when the verifications are extensive and the frequency of detected defects is high. Due to the number of product variants obtained from the modular product architecture, verifications must handle a wide range of equipment, instructions and goal values to ensure that high-quality products can be delivered. As a result, the total benefits from a modular product architecture are difficult to achieve. This thesis describes a method for planning and performing verifications within a modular product architecture. The method supports companies by utilizing the defined modules for verification already at module level, so-called MPV (Module Property Verification). With MPV, defects are detected at an earlier point, compared to verification of a complete product, and the number of verifications is decreased. The MPV method consists of three phases. In Phase A, candidate modules are evaluated on the basis of the costs and lead-time of the verifications and the repair of defects. An MPV-index is obtained which quantifies the module and indicates whether the module should be verified at product level or by MPV. In Phase B, the interface interaction between the modules is evaluated, as well as the distribution of properties among the modules. The purpose is to evaluate the extent to which supplementary verifications at product level are needed. Phase C supports the selection of the final verification strategy. The cost and lead-time of the supplementary verifications are considered together with the results from Phases A and B. The MPV method is based on a set of qualitative and quantitative measures and tools which provide an overview and support the achievement of cost- and time-efficient, company-specific verifications. A practical application in industry shows how the MPV method can be used, and the subsequent benefits.
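The Phase A screening can be pictured as a simple ratio. The thesis does not spell out the MPV-index formula in this abstract, so the function below is a hypothetical illustration: the weights, the linear cost/lead-time combination, and the ratio form are all assumptions, chosen only to show how such an index could flag modules for module-level verification.

```python
def mpv_index(module_cost, module_lead_time,
              product_cost, product_lead_time,
              w_cost=0.5, w_time=0.5):
    """Hypothetical MPV-style index (NOT the thesis's formula): ratio of
    weighted verification-and-repair effort at module level to the same
    effort at product level. Values below 1 favour module-level
    verification (MPV); values at or above 1 favour product-level."""
    module = w_cost * module_cost + w_time * module_lead_time
    product = w_cost * product_cost + w_time * product_lead_time
    return module / product
```

A module whose defects are cheap and quick to find and repair before assembly would score well below 1, matching the abstract's claim that early, module-level detection shortens lead-time.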
Abstract:
Many companies implement a modular architecture to support the need to create more variants with less effort. Although the modular architecture has many benefits, the tests to detect any defects become a major challenge. However, a modular architecture with defined functional elements seems beneficial to test at module level, so-called MPV (Module Property Verification). This paper presents studies from 29 companies with the purpose of showing trends in the occurrence of defects and how these can support MPV.