911 results for top-down approach


Relevance: 100.00%

Abstract:

Multilingual terminological resources do not always include valid equivalents of legal terms, for two main reasons. Firstly, legal systems can differ from one language community to another and even from one country to another, because each has its own history and traditions. As a result, the non-isomorphism between legal and linguistic systems may render the identification of equivalents a particularly challenging task. Secondly, by focusing primarily on the definition of equivalence, a notion widely discussed in translation but not in terminology, the literature does not offer solid and systematic methodologies for assigning terminological equivalents. As a result, there is a lack of criteria to guide both terminologists and translators in the search for and validation of equivalent terms. This problem is even more evident in the case of predicative units, such as verbs. Although some terminologists (L'Homme 1998; Lerat 2002; Lorente 2007) have worked on specialized verbs, terminological equivalence between units that belong to this part of speech would benefit from a thorough study. By proposing a novel methodology for assigning the equivalents of specialized verbs, this research aims at defining validation criteria for this kind of predicative unit, so as to contribute to a better understanding of the phenomenon of terminological equivalence as well as to the development of multilingual terminography in general, and of legal terminography in particular. The study uses a Portuguese-English comparable corpus consisting of a single genre of texts, i.e. Supreme Court judgments, from which 100 Portuguese and 100 English specialized verbs were selected. The description of the verbs is based on the theory of Frame Semantics (Fillmore 1976, 1977, 1982, 1985; Fillmore and Atkins 1992), on the FrameNet methodology (Ruppenhofer et al. 
2010), as well as on the methodology for compiling specialized lexical resources, such as DiCoInfo (L'Homme 2008), developed in the Observatoire de linguistique Sens-Texte at the Université de Montréal. The research reviews contributions that have adopted the same theoretical and methodological framework for the compilation of lexical resources and proposes adaptations to the specific objectives of the project. In contrast to the top-down approach adopted by FrameNet lexicographers, the approach described here is bottom-up, i.e. verbs are first analyzed and then grouped into frames for each language separately. Specialized verbs are said to evoke a semantic frame, a sort of conceptual scenario in which a number of mandatory elements (core Frame Elements) play specific roles (e.g. ARGUER, JUDGE, LAW), while other, optional information (non-core Frame Elements) often accompanies them, such as the criteria and reasons used by the judge to reach a decision (statutes, codes, previous decisions). The information concerning the semantic frame that each verb evokes was encoded in an XML editor, and about twenty contexts illustrating the specific way each specialized verb evokes a given frame were semantically and syntactically annotated. The labels attributed to each semantic frame (e.g. [Compliance], [Verdict]) were used to group together synonyms and antonyms as well as equivalent terms. The research identified 165 pairs of candidate equivalents among the 200 Portuguese and English terms, which were grouped into 76 frames. 71% of the pairs of equivalents were considered full equivalents because not only do the verbs evoke the same conceptual scenario, but their actantial structures, the linguistic realizations of the actants and their syntactic patterns are also similar. 29% of the pairs did not entirely meet these criteria and were considered partial equivalents. 
Reasons for partial equivalence are provided along with illustrative examples. Finally, the study describes the semasiological and onomasiological entry points that JuriDiCo, the bilingual lexical resource compiled during the project, offers to future users.
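The bottom-up grouping-and-pairing step described above, in which verbs are annotated per language, grouped by the frame they evoke, and cross-language verbs sharing a frame label become candidate equivalents, can be sketched as follows. All verbs and frame labels here are invented examples, not entries from JuriDiCo; the actual validation of full versus partial equivalence additionally compares actantial structures and syntactic patterns, which this sketch does not model.

```python
# Hypothetical sketch of the bottom-up equivalent-pairing step:
# group annotated verbs by the frame they evoke, then pair verbs
# across languages that share a frame label.
from collections import defaultdict

# Invented (language, verb, frame) annotations for illustration.
annotations = [
    ("en", "convict", "Verdict"),
    ("en", "acquit", "Verdict"),
    ("en", "comply", "Compliance"),
    ("pt", "condenar", "Verdict"),
    ("pt", "absolver", "Verdict"),
    ("pt", "cumprir", "Compliance"),
]

# Group verbs by evoked frame, separately per language.
frames = defaultdict(lambda: {"en": [], "pt": []})
for lang, verb, frame in annotations:
    frames[frame][lang].append(verb)

# Verbs that evoke the same frame across languages are candidate equivalents.
candidate_pairs = [
    (en, pt, frame)
    for frame, verbs in frames.items()
    for en in verbs["en"]
    for pt in verbs["pt"]
]
print(candidate_pairs)
```

In this toy data the [Verdict] frame yields four candidate pairs and [Compliance] one; in the study, such candidates would then be filtered into full or partial equivalents by comparing their actantial structures.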

Relevance: 100.00%

Abstract:

In today's semiconductor and MEMS technologies, photolithography is the workhorse for the fabrication of functional devices. The conventional way of microstructuring (the so-called Top-Down approach) starts with photolithography, followed by patterning of the structures by etching, especially dry etching. The demand for smaller and hence faster devices has driven feature sizes down to the range of several nanometers. However, the production of devices at this scale requires photolithography equipment that overcomes the diffraction limit. New photolithography techniques have therefore been developed recently, but they are rather expensive and restricted to planar surfaces. More recently, a new route has been presented, the so-called Bottom-Up approach, in which functional devices are built up from single atoms or molecules. This has created a new field, Nanotechnology, which deals with structures of dimensions 1 - 100 nm and whose integral part, self-assembly, has the potential to replace conventional photolithography. However, this technique requires additional, specialized equipment and is therefore not yet widely applicable. This work presents a general scheme for the fabrication of silicon and silicon dioxide structures with lateral dimensions of less than 100 nm that avoids high-resolution photolithography processes. For the self-aligned formation of extremely small openings in silicon dioxide layers at sharpened surface structures, the angle-dependent etching-rate distribution of silicon dioxide under plasma etching with a fluorocarbon gas (CHF3) was exploited. Subsequent anisotropic plasma etching of the silicon substrate through the perforated silicon dioxide masking layer results in high-aspect-ratio trenches of approximately the same lateral dimensions. 
The latter can be reduced and precisely adjusted between 0 and 200 nm by thermal oxidation of the silicon structures, owing to the volume expansion of silicon during oxidation. On this basis, a technology for the fabrication of SNOM calibration standards is presented. Additionally, the so-formed trenches were used as templates for CVD deposition of diamond, resulting in high-aspect-ratio diamond knives. A lithography-free method for the production of periodic and nonperiodic surface structures using the angular dependence of the etching rate is also presented. It combines the self-assembly of masking particles with the conventional plasma etching techniques known from microelectromechanical systems technology. The method is generally applicable to bulk as well as layered materials. In this work, layers of glass spheres of different diameters were assembled on the sample surface, forming a mask against plasma etching. Silicon surface structures with a periodicity of 500 nm and feature dimensions of 20 nm were produced in this way. Thermal oxidation of the structured silicon substrate offers the capability to vary the fill factor of the periodic structure, owing to the volume expansion during oxidation, but also to define silicon dioxide surface structures by selective plasma etching. Similar structures can be obtained simply by structuring silicon dioxide layers on silicon. The method offers a simple route for bridging nano- and microtechnology and, moreover, an uncomplicated way to fabricate photonic crystals.
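The trench-width adjustment by thermal oxidation follows from simple volume bookkeeping: thermally grown SiO2 of thickness t_ox consumes roughly 0.44·t_ox of silicon (the standard molar-volume ratio), so each oxidized sidewall protrudes about 0.56·t_ox into the trench. A back-of-the-envelope sketch with illustrative numbers, not the actual process parameters of this work:

```python
# Back-of-the-envelope estimate of trench narrowing by thermal oxidation.
# Standard textbook ratio: oxide of thickness t_ox consumes ~0.44*t_ox
# of silicon, so each sidewall protrudes (1 - 0.44)*t_ox into the trench.
SI_CONSUMED_FRACTION = 0.44

def trench_width_after_oxidation(initial_width_nm, t_ox_nm):
    """Remaining trench width after growing t_ox of oxide on both sidewalls."""
    protrusion_per_wall = (1.0 - SI_CONSUMED_FRACTION) * t_ox_nm
    return max(0.0, initial_width_nm - 2.0 * protrusion_per_wall)

# A 200 nm trench narrowed by 100 nm of oxide on each sidewall:
print(trench_width_after_oxidation(200, 100))  # -> 88.0
```

Each 100 nm of grown oxide thus closes a trench by roughly 112 nm, which is why oxidation allows the final gap to be tuned precisely down to zero.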

Relevance: 100.00%

Abstract:

The ongoing growth of the World Wide Web, catalyzed by the increasing possibility of ubiquitous access via a variety of devices, continues to strengthen its role as our prevalent information and communication medium. However, although tools like search engines facilitate retrieval, the task of finally making sense of Web content is still often left to human interpretation. The vision of supporting both humans and machines in such knowledge-based activities led to the development of different systems which allow Web resources to be structured by metadata annotations. Interestingly, two major approaches which gained a considerable amount of attention address the problem from nearly opposite directions: on the one hand, the idea of the Semantic Web suggests formalizing the knowledge within a particular domain by means of the "top-down" approach of defining ontologies. On the other hand, Social Annotation Systems, as part of the so-called Web 2.0 movement, implement a "bottom-up" style of categorization using arbitrary keywords. Experience as well as research into the characteristics of both systems has shown that their strengths and weaknesses are inverse: while Social Annotation suffers from problems such as ambiguity or lack of precision, ontologies were specifically designed to eliminate those; the latter, on the other hand, suffer from a knowledge acquisition bottleneck, which is successfully overcome by the large user populations of Social Annotation Systems. Instead of being regarded as competing paradigms, the obvious potential synergies from a combination of the two motivated approaches to "bridge the gap" between them. These were fostered by the evidence of emergent semantics, i.e., the self-organized evolution of implicit conceptual structures, within Social Annotation data. 
While several techniques to exploit the emergent patterns have been proposed, a systematic analysis - especially regarding paradigms from the field of ontology learning - is still largely missing. This also includes a deeper understanding of the circumstances which affect the evolution processes. This work aims to address this gap by providing an in-depth study of methods and influencing factors for capturing emergent semantics from Social Annotation Systems. We focus here on the acquisition of lexical semantics from the underlying networks of keywords, users and resources. Structured along different ontology learning tasks, we use a methodology of semantic grounding to characterize and evaluate the semantic relations captured by different methods. In all cases, our studies are based on datasets from several Social Annotation Systems. Specifically, we first analyze semantic relatedness among keywords, and identify measures which detect different notions of relatedness. These constitute the input of concept learning algorithms, which then focus on the discovery of synonymous and ambiguous keywords. Here, we assess the usefulness of various clustering techniques. As a prerequisite to inducing hierarchical relationships, our next step is to study measures which quantify the level of generality of a particular keyword. We find that comparatively simple measures can approximate the generality information encoded in reference taxonomies. These insights inform the final task, namely the creation of concept hierarchies, for which generality-based algorithms exhibit advantages over clustering approaches. To complement the identification of suitable methods for capturing semantic structures, we next analyze several factors which influence their emergence. Empirical evidence is provided that the amount of available data plays a crucial role in determining keyword meanings. 
From a different perspective, we examine pragmatic aspects by considering different annotation patterns among users. Based on a broad distinction between "categorizers" and "describers", we find that the latter produce more accurate results. This suggests a causal link between pragmatic and semantic aspects of keyword annotation. As a special kind of usage pattern, we then look at system abuse and spam. While observing a mixed picture, we suggest that a case-by-case decision should be taken rather than disregarding spammers as a matter of principle. Finally, we discuss a set of applications which operationalize the results of our studies to enhance both Social Annotation and semantic systems. These comprise, on the one hand, tools which foster the emergence of semantics, and, on the other hand, applications which exploit the socially induced relations to improve, e.g., searching, browsing, or user profiling facilities. In summary, the contributions of this work highlight viable methods and crucial aspects for designing enhanced knowledge-based services of a Social Semantic Web.

Relevance: 100.00%

Abstract:

Inverse problems for dynamical system models of cognitive processes comprise the determination of synaptic weight matrices or kernel functions for neural networks or neural/dynamic field models, respectively. We introduce dynamic cognitive modeling as a three-tier top-down approach in which cognitive processes are first described as algorithms that operate on complex symbolic data structures. Second, symbolic expressions and operations are represented by states and transformations in abstract vector spaces. Third, prescribed trajectories through representation space are implemented in neurodynamical systems. We discuss the Amari equation for a neural/dynamic field theory as a special case and show that the kernel construction problem is particularly ill-posed. We suggest a Tikhonov-Hebbian learning method as a regularization technique and demonstrate its validity and robustness for basic examples of cognitive computations.
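The field equation referred to above has a standard form that makes the ill-posedness concrete. The notation below follows common presentations of the Amari equation rather than this paper itself, so the symbols (in particular the resting level h and the regularization functional) are assumptions:

```latex
% Amari neural/dynamic field equation (standard form):
\tau\,\frac{\partial u(x,t)}{\partial t}
  = -u(x,t) + \int_{\Omega} w(x,y)\,f\bigl(u(y,t)\bigr)\,\mathrm{d}y + h
% Kernel construction (inverse) problem: given a prescribed trajectory
% v(x,t), recover the kernel w. This linear integral equation for w is
% ill-posed; Tikhonov regularization with parameter \alpha > 0 replaces
% it by the minimization
\min_{w}\;\Bigl\|\int_{\Omega} w(x,y)\,f\bigl(v(y,t)\bigr)\,\mathrm{d}y
  - \Bigl(\tau\,\frac{\partial v(x,t)}{\partial t} + v(x,t) - h\Bigr)\Bigr\|^{2}
  + \alpha\,\|w\|^{2}
```

The "Hebbian" aspect of the proposed method refers to constructing the kernel from correlations of the prescribed states; the penalty term above is the generic Tikhonov form, not necessarily the paper's exact functional.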

Relevance: 100.00%

Abstract:

Portfolio managers who follow a top-down approach to fund management need to know, when developing a pan-European investment strategy, which factors most affect property returns, so that they can concentrate their management and research efforts accordingly. To examine this issue, this paper analyses the relative importance of country, sector and regional effects in determining property returns across Europe, using the largest database of individual property returns currently available. Using annual data over the period 1996 to 2002 for a sample of over 25,000 properties, the results show that country-specific effects dominate sector-specific factors, which in turn dominate regional-specific factors. This is true even for different sub-sets of countries and sectors. In other words, real estate returns are mainly determined by local (country-specific) conditions and are only mildly affected by general European factors. Thus, for institutional investors contemplating investment in Europe, the first level of analysis must be an examination of the individual countries, followed by the prospects of the property sectors within each country, and then an assessment of the differences in expected performance between the main city and the rest of the country.
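What "country effects dominating sector effects" means can be illustrated by comparing the spread of average returns across groupings. The figures below are invented, and the comparison of group-mean spreads is only a didactic stand-in for the paper's actual econometric decomposition over 25,000+ properties:

```python
# Toy illustration of country vs. sector effects in property returns.
# All figures are invented; real studies estimate these effects
# econometrically, not from raw group means.
data = [
    # (country, sector, annual return)
    ("UK", "office", 0.08), ("UK", "retail", 0.07),
    ("DE", "office", 0.04), ("DE", "retail", 0.03),
    ("FR", "office", 0.06), ("FR", "retail", 0.05),
]

def mean(xs):
    return sum(xs) / len(xs)

country_avg = {c: mean([r for cc, _, r in data if cc == c])
               for c in ("UK", "DE", "FR")}
sector_avg = {s: mean([r for _, ss, r in data if ss == s])
              for s in ("office", "retail")}

# Spread of average returns across countries vs. across sectors:
country_spread = max(country_avg.values()) - min(country_avg.values())
sector_spread = max(sector_avg.values()) - min(sector_avg.values())
print(country_spread > sector_spread)  # True: country effects dominate here
```

In this toy sample the cross-country spread (about 4 percentage points) dwarfs the cross-sector spread (about 1 point), mirroring the paper's qualitative finding.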

Relevance: 100.00%

Abstract:

A stylised fact in the real estate portfolio diversification literature is that sector (property-type) effects are relatively more important than regional (geographical) factors in determining property returns. Thus, portfolio managers who follow a top-down approach to portfolio management should first choose the sectors in which to invest and then select the best properties in each market. However, the question arises as to whether the dominance of sector effects relative to regional effects is constant. If not, property fund managers will need to take account of regional effects in developing their portfolio strategy. Using monthly data over the period 1987:1 to 2002:12 for a sample of over 1000 properties, the results show that sector-specific factors dominate regional-specific factors for the vast majority of the time. Nonetheless, there are periods when the regional factors are of equal or greater importance than the sector effects. In particular, the sector effects tend to dominate during volatile periods of the real estate cycle, whereas during calmer periods the sector and regional effects are of equal importance. These findings suggest that sector effects remain the most important consideration in the development of an active portfolio strategy.

Relevance: 100.00%

Abstract:

A stylised fact in the real estate portfolio diversification literature is that sector (property-type) effects are relatively more important than regional (geographical) factors in determining property returns. Thus, portfolio managers who follow a top-down approach to portfolio management should first choose the sectors in which to invest and then select the best properties in each market. However, the question arises as to whether the dominance of sector effects relative to regional effects is constant. If not, property fund managers will need to take account of regional effects in developing their portfolio strategy. We find that sector-specific factors dominate regional-specific factors for the vast majority of the time. Nonetheless, there are periods when the regional factors are of equal or greater importance than the sector effects. In particular, the sector effects tend to dominate during volatile periods of the real estate cycle, whereas during calmer periods the sector and regional effects are of equal importance. These findings suggest that sector effects remain the most important consideration in the development of an active portfolio strategy.

Relevance: 100.00%

Abstract:

The work presented in this report is part of the effort to define the landscape state and diversity indicator in the frame of COM (2006) 508, “Development of agri-environmental indicators for monitoring the integration of environmental concerns into the common agricultural policy”. The Communication classifies the indicators according to their level of development, which, for the landscape indicator, is “in need of substantial improvements in order to become fully operational”. For this reason a full re-definition of the indicator has been carried out, following the initial proposal presented in the frame of the IRENA operation (“Indicator Reporting on the Integration of Environmental Concerns into Agricultural Policy”). The new proposal for the landscape state and diversity indicator is structured in three components: the first concerns the degree of naturalness, the second landscape structure, and the third the societal appreciation of the rural landscape. While the first two components rely on a substantial body of existing literature, the development of the methodology has made evident the need for further analysis of the third component, which is based on a newly proposed top-down approach. This report presents an in-depth analysis of this component of the indicator, and of the effort to include a social dimension in large-scale landscape assessment.

Relevance: 100.00%

Abstract:

Drought is a global problem that has far-reaching impacts, especially on vulnerable populations in developing regions. This paper highlights the need for a Global Drought Early Warning System (GDEWS), the elements that constitute its underlying framework (GDEWF), and the recent progress made towards its development. Many countries lack drought monitoring systems, as well as the capacity to respond via appropriate political, institutional and technological frameworks, and this has inhibited the development of integrated drought management plans and early warning systems. The GDEWS will provide a source of drought tools and products, via the GDEWF, for countries and regions to develop drought early warning systems tailored to their own users. A key goal of a GDEWS is to maximize the lead time for early warning, allowing drought managers and disaster coordinators more time to put mitigation measures in place to reduce vulnerability to drought. To achieve this, the GDEWF will take both a top-down approach, providing global real-time drought monitoring and seasonal forecasting, and a bottom-up approach that builds upon existing national and regional systems to provide continental to global coverage. A number of challenges must be overcome, however, before a GDEWS can become a reality, including the lack of in-situ measurement networks and modest seasonal forecast skill in many regions, and the lack of infrastructure to translate data into usable information. A set of international partners, through a series of recent workshops and evolving collaborations, has made progress towards meeting these challenges and developing a global system.

Relevance: 100.00%

Abstract:

Integration or illusion – a deviance perspective. Denmark experienced one of its most successful periods of economic growth in 2004–2008, with a tremendous reduction in unemployment, which in June 2008 was around 1.5 percent, far below the expected level of structural unemployment. In the wake of this development, the under-utilization of migrants’ education and skills once again became a core concern. The political, societal and academic debate largely followed the traditional top-down approach to the problem and revolved around two axes: 1. how effective the labour market was/is at making use of migrants’ skills; 2. whether there were patterns of over-education as an expression of institutional and societal discrimination. The focus of the present study is, however, quite different: we examine the pattern of deviance in relation to labour market participation (not integration), and instead of searching for explanations for the lack of integration, we attempt to identify and explain the deviance pattern as a product of institutionally inherent possibilities and barriers on the one hand, and of immigrants understood as rational actors (not victims) on the other. We argue that deviance is a more fruitful theoretical and analytical framework than integration and discrimination. Taking its departure from empirical evidence on immigrants’ preferences and behaviour as boundedly rational actors, and on how they actually articulate their practical everyday experiences, including the adjustment of what they want to what they can achieve, the deviance perspective, we believe, also reduces the theoretical and normative biases that characterise the discrimination and integration frameworks, and provides more reliable explanations. 

Relevance: 100.00%

Abstract:

The demand for corporate accountability has never been greater. The need to combine corporate governance with efficient control activities has never been clearer. This dissertation aims to answer the question of whether the Top-Down Earnings at Risk approach can be considered compatible with the demands of Sarbanes-Oxley and, additionally, an efficient risk-management method for non-financial companies. Based on the results we found, the Top-Down Earnings at Risk approach does not meet the demands imposed by Sarbanes-Oxley. Although SOX is concerned with the effectiveness rather than the efficiency of the controls companies use, management decisions based on this method may lead a company into error.
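Earnings at Risk is commonly defined as the shortfall of earnings below their expected value at a given confidence level, estimated from a simulated earnings distribution. The Monte Carlo sketch below uses entirely hypothetical figures and a generic normal model, not the dissertation's data or specification:

```python
# Minimal Monte Carlo sketch of a top-down Earnings at Risk measure:
# simulate aggregate earnings, then take the gap between expected
# earnings and a low quantile of the simulated distribution.
# All figures are hypothetical.
import random

random.seed(42)
# Hypothetical earnings: mean 100, volatility 15 (arbitrary units).
simulated_earnings = [100 + random.gauss(0, 15) for _ in range(10_000)]

expected = sum(simulated_earnings) / len(simulated_earnings)
sorted_e = sorted(simulated_earnings)
q05 = sorted_e[int(0.05 * len(sorted_e))]  # 5th-percentile earnings
ear_95 = expected - q05                    # Earnings at Risk at 95%

print(round(ear_95, 1))
```

For a normal model this lands near 1.645 standard deviations (roughly 25 in these units); in a top-down setting the simulated quantity is aggregate earnings directly, rather than earnings built up from individual risk factors.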

Relevance: 100.00%

Abstract:

The first Brazilian metropolitan regions were established in a vertical and authoritarian manner as part of the national development strategy promoted by the military government. Perceived as non-democratic institutions and rejected as a possible fourth federative entity, metropolitan regions have, since the 1988 Constitution, been gradually emptied of their original purposes. In their orphanhood, socioeconomic problems proliferated and were accentuated, and competitive rather than cooperative intergovernmental relations came to predominate. One of the main challenges facing the Brazilian federalist model, especially with regard to these regions, is the need to establish greater cooperation and coordination, considered essential both to guarantee a more balanced relationship among the federative entities and to effectively implement policies to confront inequality and social exclusion in urban agglomerations. This study analyzes the Grande Recife Consórcio Metropolitano de Transportes (CMT), a multi-federative public company established in 2008 by the municipal and state governments of the Recife Metropolitan Region (RMR). Responsible for the shared planning, management and implementation of public transport policy in the RMR, Grande Recife became a reality with the approval and regulation of Federal Law 11,107 of 2005, known as the Public Consortia Law. Grande Recife is a pioneering and innovative experience, demonstrating that it is possible to find a way to overcome common conflicts and challenges while preserving the autonomy of each entity as well as citizens' rights. 
In this study we treat this experience of intergovernmental cooperation as an example of multi-level governance (MLG), since it illustrates a new democratic institutional arrangement among distinct levels of government for the shared management of a public service.

Relevance: 100.00%

Abstract:

Most current ultra-miniaturized devices are obtained by the top-down approach, in which nanoscale components are fabricated by cutting down larger precursors. Since this physical-engineering method is reaching its limits, especially for components below 30 nm in size, alternative strategies are necessary. Of particular appeal to chemists is the supramolecular bottom-up approach to nanotechnology, a methodology that utilizes the principles of molecular recognition to build materials and devices from molecular components. The subject of this thesis is the photophysical and electrochemical investigation of nanodevices obtained by harnessing the principles of supramolecular chemistry. These systems operate in solution-based environments and are investigated at the ensemble level. The majority of the chemical systems discussed here are based on pseudorotaxanes and catenanes. Such supramolecular systems represent prototypes of molecular machines, since they are capable of performing simple controlled mechanical movements. Their properties and operation are strictly related to the supramolecular interactions between molecular components (generally photoactive or electroactive molecules) and to the possibility of modulating such interactions by means of external stimuli. The main issues addressed throughout the thesis are: (i) the analysis of the factors that can affect the architecture and perturb the stability of supramolecular systems; (ii) the possibility of controlling the direction of supramolecular motions by exploiting the molecular information content; (iii) the development of switchable supramolecular polymers starting from simple host-guest complexes; (iv) the capability of some molecular machines to process information at the molecular level, thus behaving as logic devices; (v) the behaviour of molecular machine components in a biological-type environment; (vi) the study of chemically functionalized metal nanoparticles by second harmonic generation spectroscopy.

Relevance: 100.00%

Abstract:

MFA and LCA methodologies were applied to analyse the anthropogenic aluminium cycle in Italy, focusing on the historical evolution of stocks and flows of the metal, embodied GHG emissions, and recycling potentials, in order to provide Italy with key inputs for prioritizing industrial policy toward low-carbon technologies and materials. Historical time series were collected from 1947 to 2009 and balanced with data from the production, manufacturing and waste management of aluminium-containing products, using a ‘top-down’ approach to quantify the contemporary in-use stock of the metal and to identify ‘applications where aluminium is not yet being recycled to its full potential and to identify present and future recycling flows’. The MFA results served as the basis for the LCA, which evaluated the evolution of the carbon footprint embodied in Italian aluminium, from primary and electrical energy, the smelting process and transportation. A discussion is also provided of how the main factors, according to the Kaya identity, influenced the Italian GHG emission pattern over time, and of the levers available to mitigate it. The contemporary anthropogenic reservoir of aluminium was estimated at about 320 kg per capita, mainly embedded in the transportation and the building and construction sectors. The cumulative in-use stock represents approximately 11 years of supply at current usage rates (about 20 Mt versus 1.7 Mt/year) and would imply a potential saving of about 160 Mt of CO2eq emissions. A discussion of criticalities in aluminium waste recovery from the transportation and the containers-and-packaging sectors is also included, providing an example of how MFA and LCA may support decision-making at the sectoral or regional level. The research constitutes the first attempt at an integrated MFA-LCA approach applied to the aluminium cycle in Italy.
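The ‘top-down’ stock estimate described above amounts to a mass balance: in-use stock is the cumulative difference between the flows of aluminium entering use and the end-of-life outflows. A minimal sketch with invented yearly flows (the actual study balances six decades of production, manufacturing and waste data):

```python
# Sketch of a top-down MFA in-use stock estimate: cumulative
# (inflow - outflow) of aluminium entering and leaving use.
# The yearly flows below are invented, in kt.
inflow = [100, 120, 150, 170, 200]   # aluminium entering use each year
outflow = [20, 30, 40, 60, 80]       # end-of-life aluminium each year

stock = []
total = 0.0
for f_in, f_out in zip(inflow, outflow):
    total += f_in - f_out            # mass balance for one year
    stock.append(total)

print(stock)  # -> [80.0, 170.0, 280.0, 390.0, 510.0]
```

The same balance, run over the 1947-2009 series, yields the roughly 20 Mt (about 320 kg per capita) contemporary in-use stock reported above.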