72 results for Computational Geometry and Object Modeling
Abstract:
The long observational record is critical to our understanding of the Earth’s climate, but most observing systems were not developed with a climate objective in mind. As a result, tremendous efforts have gone into assessing and reprocessing the data records to improve their usefulness in climate studies. The purpose of this paper is both to review recent progress in reprocessing and reanalyzing observations, and to summarize the challenges that must be overcome in order to improve our understanding of climate and variability. Reprocessing improves data quality through closer scrutiny and improved retrieval techniques for individual observing systems, while reanalysis merges many disparate observations with models through data assimilation; both aim to provide a climatology of Earth processes. Many challenges remain, such as tracking the improvement of processing algorithms and limited spatial coverage. Reanalyses have fostered significant research, yet reliable global trends in many physical fields are not yet attainable, despite significant advances in data assimilation and numerical modeling. Oceanic reanalyses have made significant advances in recent years, but will only be discussed here in terms of progress toward integrated Earth system analyses. Climate data sets are generally adequate for process studies and large-scale climate variability. Communication of the strengths, limitations, and uncertainties of reprocessed observations and reanalysis data, not only among the community of developers but also with the extended research community, including new generations of researchers and decision makers, is crucial for further advancement of the observational data records. It must be emphasized that careful investigation of the data and processing methods is required to use the observations appropriately.
Abstract:
Floods are a major threat to human existence and historically have both caused the collapse of civilizations and forced the emergence of new cultures. The physical processes of flooding are complex. Increased population, climate variability, changes in catchment and channel management, modified land use and land cover, and natural change of floodplains and river channels all lead to changes in flood dynamics and, as a direct or indirect consequence, in the social welfare of humans. Section 5.08.1 explores the risks and benefits brought about by floods and reviews the responses of floods and floodplains to climate and land-use change. Section 5.08.2 reviews the existing modeling tools, and the top–down and bottom–up modeling frameworks that are used to assess impacts on future floods. Section 5.08.3 discusses changing flood risk and socioeconomic vulnerability based on current trends in emerging or developing countries and presents an alternative paradigm as a pathway to resilience. Section 5.08.4 concludes the chapter by stating a portfolio of integrated concepts, measures, and avant-garde thinking that would be required to sustainably manage future flood risk.
Abstract:
Lipid cubic phases are complex nanostructures that form naturally in a variety of biological systems, with applications including drug delivery and nanotemplating. Most X-ray scattering studies on lipid cubic phases have used unoriented polydomain samples as either bulk gels or suspensions of micrometer-sized cubosomes. We present a method of investigating cubic phases in a new form, as supported thin films that can be analyzed using grazing incidence small-angle X-ray scattering (GISAXS). We present GISAXS data on three lipid systems: phytantriol and two grades of monoolein (research and industrial). The use of thin films brings a number of advantages. First, the samples exhibit a high degree of uniaxial orientation about the substrate normal. Second, the new morphology allows precise control of the mesophase geometry and lattice parameter using a controlled temperature and humidity environment, and we demonstrate the controllable formation of oriented diamond and gyroid inverse bicontinuous cubic phases, along with lamellar phases. Finally, the thin film morphology allows the induction of reversible phase transitions between these mesophase structures by changes in humidity on subminute time scales, and we present time-resolved GISAXS data monitoring these transformations.
Abstract:
Language processing plays a crucial role in language development, providing the ability to assign structural representations to input strings (e.g., Fodor, 1998). In this paper we aim to contribute to the study of children's processing routines, examining the operations underlying the auditory processing of relative clauses in children compared to adults. English-speaking children (6–8;11) and adults participated in the study, which employed a self-paced listening task with a final comprehension question. The aim was to determine (i) the role of number agreement in object relative clauses in which the subject and object NPs differ in terms of number properties, and (ii) the role of verb morphology (active vs. passive) in subject relative clauses. Even though children's off-line accuracy was not always comparable to that of adults, analyses of reaction-time results support the view that children have the same structural processing reflexes observed in adults.
Abstract:
Factor forecasting models are shown to deliver real-time gains over autoregressive models for US real activity variables during the recent period, but are less successful for nominal variables. The gains are largely due to the Financial Crisis period, and are primarily at the shortest (one quarter ahead) horizon. Excluding the pre-Great Moderation years from the factor forecasting model estimation period (but not from the data used to extract factors) results in a marked fillip in factor model forecast accuracy, but does the same for the AR model forecasts. The relative performance of the factor models compared to the AR models is largely unaffected by whether the exercise is in real time or is pseudo out-of-sample.
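The comparison above rests on two standard forecasting devices: an autoregressive benchmark and a "diffusion-index" forecast that regresses the target on principal-component factors extracted from a large panel. The sketch below illustrates that generic setup on synthetic data; the panel sizes, factor count, and series names are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quarterly panel: 50 series over 120 quarters, driven by
# two latent factors (sizes are illustrative, not from the paper).
T, N = 120, 50
F = rng.standard_normal((T, 2))                      # latent factors
X = F @ rng.standard_normal((2, N)) + 0.5 * rng.standard_normal((T, N))
y = 0.8 * F[:, 0] + 0.1 * rng.standard_normal(T)     # target activity series

def pca_factors(X, k):
    """Extract k principal-component factors from a standardized panel."""
    Z = (X - X.mean(0)) / X.std(0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:k].T

def one_step_forecast(y, regressors):
    """One-quarter-ahead forecast: regress y[t+1] on regressors[t] by OLS."""
    Xr = np.column_stack([np.ones(len(regressors) - 1), regressors[:-1]])
    beta, *_ = np.linalg.lstsq(Xr, y[1:], rcond=None)
    return np.append(1.0, regressors[-1]) @ beta

Fhat = pca_factors(X, 2)
ar_pred = one_step_forecast(y, y[:, None])   # AR(1) benchmark
factor_pred = one_step_forecast(y, Fhat)     # diffusion-index forecast
```

Truncating the estimation sample (as the abstract describes for the pre-Great Moderation years) would amount to slicing `y` and `Fhat` before the regression step while still extracting `Fhat` from the full panel.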
Abstract:
The assessment of chess players is an increasingly attractive opportunity and an unfortunate necessity. The chess community needs to limit potential reputational damage by inhibiting cheating and unjustified accusations of cheating: there has been a recent rise in both. A number of counter-intuitive discoveries have been made by benchmarking the intrinsic merit of players’ moves: these call for further investigation. Is Capablanca actually, objectively the most accurate World Champion? Has Elo rating inflation not taken place? Stimulated by FIDE/ACP, we revisit the fundamentals of the subject to advance a framework suitable for improved standards of computational experiment and more precise results. Other domains look to chess as the demonstrator of good practice, including the rating of professionals making high-value decisions under pressure, personnel evaluation by Multichoice Assessment, and the organization of crowd-sourcing in citizen science projects. The ‘3P’ themes of performance, prediction and profiling pervade all these domains.
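For readers outside the chess domain, the rating-inflation question refers to the standard Elo system, in which a player's expected score against an opponent is a logistic function of the rating gap. The snippet below shows that textbook formula and its update rule; it is background for the abstract's discussion, not the intrinsic-merit benchmarking method the paper itself develops.

```python
def elo_expected(r_a, r_b):
    """Standard Elo expected score for player A (rating r_a) vs. B (r_b)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(rating, score, expected, k=20):
    """Post-game rating update; k=20 is a commonly used K-factor."""
    return rating + k * (score - expected)

# A 100-point rating gap gives the stronger player roughly a 64% expectation.
e = elo_expected(2800, 2700)
```

Intrinsic-merit approaches instead score the moves themselves against a strong engine, so they can in principle detect whether a fixed rating number corresponds to weaker or stronger play across eras.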
Abstract:
Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems on covariates of more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising solution to reduce the model’s dimensionality to a manageable level, thus leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, based on tensor decompositions that allow simultaneous projections of an input tensor onto more than one direction along each mode. In practice, multi-dimensional data are collected under the same or very similar conditions, so the data share some common latent components but can also have their own independent parameters for each regression task. Therefore, it is beneficial to analyse regression parameters among all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker Decomposition, which identifies not only the common components of parameters across all the regression tasks, but also the independent factors contributing to each particular regression task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modeling further reduce the total number of parameters, with lower memory cost than their tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
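The dimensionality reduction the abstract describes comes from giving the coefficient tensor a Tucker structure: factor matrices along each mode plus a small core. The sketch below shows this for a 2-D (matrix) covariate, with illustrative sizes and ranks chosen here for the example; it demonstrates the parameter-count saving, not the paper's linked multi-task estimator or its regulariser.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: 20x30 tensor covariates, Tucker ranks (3, 4).
d1, d2, r1, r2 = 20, 30, 3, 4
U1 = rng.standard_normal((d1, r1))   # mode-1 factor (shareable across tasks)
U2 = rng.standard_normal((d2, r2))   # mode-2 factor (shareable across tasks)
G = rng.standard_normal((r1, r2))    # core, specific to one regression task

# Tucker-structured coefficient: B = U1 @ G @ U2.T
B = U1 @ G @ U2.T

# The prediction for one covariate X is the inner product <X, B>.
X = rng.standard_normal((d1, d2))
y_hat = np.sum(X * B)

# Unstructured B needs d1*d2 parameters; the Tucker form needs only
# d1*r1 + d2*r2 + r1*r2.
full_params = d1 * d2                     # 600
tucker_params = d1 * r1 + d2 * r2 + r1 * r2   # 192
```

In the multi-task setting the abstract describes, `U1` and `U2` would be shared across regressions while each task keeps its own core `G`, which is what links the parameter estimates.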
Abstract:
Key Performance Indicators (KPIs) are the main instruments of Business Performance Management. KPIs are measures that are tied to both the strategy and the business processes of an organization. These measures are often designed for an industry sector with assumptions about the business processes in organizations. However, these assumptions can be too incomplete to guarantee the required properties of KPIs. This raises the need to validate the properties of KPIs prior to their application to performance measurement. This paper applies the method called EXecutable Requirements Engineering Management and Evolution (EXTREME) for validation of KPI definitions. EXTREME semantically relates the goal modeling, conceptual modeling, and protocol modeling techniques into one methodology. The synchronous composition built into protocol modeling enables traceability of goals in protocol models and constructive definitions of a KPI. The application of the method clarifies the meaning of KPI properties and the procedures for their assessment and validation.
Abstract:
The efficiency of a Wireless Power Transfer (WPT) system is greatly dependent on both the geometry and operating frequency of the transmitting and receiving structures. By using Coupled Mode Theory (CMT), the figure of merit is calculated for resonantly-coupled loop and dipole systems. An in-depth analysis of the figure of merit is performed with respect to the key geometric parameters of the loops and dipoles, along with the resonant frequency, in order to identify the key relationships leading to high-efficiency WPT. For systems consisting of two identical single-turn loops, it is shown that the choice of both the loop radius and resonant frequency is essential in achieving high-efficiency WPT. For the dipole geometries studied, it is shown that the choice of length is largely irrelevant and that, as a result of their capacitive nature, low-MHz frequency dipoles are able to produce significantly higher figures of merit than those of the loops considered. The results of the figure of merit analysis are used to propose and subsequently compare two mid-range loop and dipole WPT systems of equal size and operating frequency, where it is shown that the dipole system is able to achieve higher efficiencies than the loop system over the distance range examined.
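In the standard CMT treatment of resonant WPT, the figure of merit is U = kappa / sqrt(Gamma_tx * Gamma_rx) (coupling rate over the geometric mean of the resonators' loss rates), and the maximum achievable transfer efficiency is a simple function of U. The snippet below implements that textbook relationship; the paper's exact definitions and loading conventions may differ, so treat this as background rather than the authors' formulation.

```python
import math

def wpt_efficiency(U):
    """Maximum power-transfer efficiency for CMT figure of merit
    U = kappa / sqrt(Gamma_tx * Gamma_rx) (standard result for two
    coupled resonators with an optimally loaded receiver)."""
    return U**2 / (1.0 + math.sqrt(1.0 + U**2))**2

# Efficiency rises monotonically with U and saturates toward 1.
eta_weak = wpt_efficiency(1.0)     # ~17%
eta_strong = wpt_efficiency(10.0)  # ~82%
```

This is why the abstract's comparison reduces to comparing figures of merit: whichever geometry achieves the larger U at a given distance achieves the higher efficiency.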
Abstract:
The present study investigates the parsing of pre-nominal relative clauses (RCs) in children for the first time with a real-time methodology that reveals moment-to-moment processing patterns as the sentence unfolds. A self-paced listening experiment with Turkish-speaking children (aged 5–8) and adults showed that both groups display a sign of processing cost in both subject and object RCs at different points through the flow of the utterance when integrating cues that are uninformative (i.e., ambiguous in function) and that are structurally and probabilistically unexpected. Both groups show a processing facilitation as soon as the morphosyntactic dependencies are completed, and parse the unbounded dependencies rapidly using the morphosyntactic cues rather than waiting for the clause-final filler. These findings indicate that five-year-old children show similar patterns to adults in processing the morphosyntactic cues incrementally and in forming expectations about the rest of the utterance on the basis of the probabilistic model of their language.
Abstract:
The products of reactions of the pharmaceutical amide carbamazepine (CBZ) with strong acids under aqueous conditions were investigated by both powder and single crystal X-ray diffraction. Despite previous claims to the contrary, it was found that salt forms with CBZ protonated at the amide O atom could be isolated from reactions with both HCl and HBr. These forms include the newly identified hydrate phase [CBZ(H)][Cl]·H2O. Reactions with other mineral acids (HI and HBF4) gave ionic cocrystalline (ICC) forms (CBZ·[acridinium][I3]·2.5I2 and CBZ·[H5O2]0.25[BF4]0.25·H2O) as well as the salt form CBZ·[CBZ(H)][BF4]·0.5H2O. Reaction of CBZ with a series of sulfonic acids also gave salt forms, namely, [CBZ(H)][O3SC6H5], [CBZ(H)][O3SC6H4(OH)]·0.5H2O, [CBZ(H)]2[O3SCH2CH2SO3], and [CBZ(H)][O3SC6H3(OH)(COOH)]·H2O. CBZ and protonated CBZ(H) moieties can be differentiated in the solid state both by changes to molecular geometry and by differing packing preferences.
Abstract:
A Universal Serial Bus (USB) Mass Storage Device (MSD), often termed a USB flash drive, is ubiquitously used to store important information in unencrypted binary format. This low-cost consumer device is incredibly popular due to its size, large storage capacity, and relatively high transfer speed. However, if the device is lost or stolen, an unauthorized person can easily retrieve all the information. Therefore, it is advantageous in many applications to provide security protection so that only authorized users can access the stored information. In order to provide security protection for a USB MSD, this paper proposes a session key agreement protocol that follows secure user authentication. The main aim of this protocol is to establish session key negotiation through which all the information retrieved, stored, and transferred to the USB MSD is encrypted. The proposed protocol is efficient and, unlike some protocols in the literature, resists the forgery attack and the password guessing attack. This paper analyses the security of the proposed protocol through a formal analysis, which proves that the information is stored confidentially and is protected, offering strong resilience to relevant security attacks. The computational cost and communication cost of the proposed scheme are analyzed and compared to related work to show that the proposed scheme offers an improved tradeoff among computational cost, communication cost, and security.
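The general pattern behind such schemes is that host and device hold a shared long-term secret (typically derived from the user's password) and combine it with freshly exchanged nonces so that each session gets a new encryption key. The toy sketch below illustrates that generic pattern with standard primitives (PBKDF2 and HMAC-SHA256); it is an illustrative construction, not the protocol proposed in the paper, and the function names and parameters are assumptions made for the example.

```python
import hashlib
import hmac
import os

def derive_long_term_key(password: bytes, salt: bytes) -> bytes:
    """Password-derived long-term key (PBKDF2-HMAC-SHA256).
    The iteration count is an illustrative choice."""
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

def session_key(long_term_key: bytes, nonce_host: bytes,
                nonce_device: bytes) -> bytes:
    """Both parties compute the same fresh session key from the shared
    long-term key and the two exchanged nonces."""
    return hmac.new(long_term_key, nonce_host + nonce_device,
                    hashlib.sha256).digest()

# Toy run: the password-derived key is established during enrollment;
# the nonces are exchanged at the start of each session.
salt = os.urandom(16)
ltk = derive_long_term_key(b"user-password", salt)
n_host, n_dev = os.urandom(16), os.urandom(16)
k_host = session_key(ltk, n_host, n_dev)     # computed on the host
k_device = session_key(ltk, n_host, n_dev)   # computed on the device
```

Because the nonces change per session, a captured session key does not expose past or future traffic, and an attacker without the long-term key cannot forge a matching key from the public nonces alone.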