878 results for Observational techniques and algorithms


Relevance: 100.00%

Abstract:

The intensity data of the title complex were collected at −90 °C. The compound crystallizes in the monoclinic space group P2₁/n, with a = 17.504(2), b = 27.323(5), c = 21.616(4) Å, β = 104.49(2)°, Z = 4. The structure was solved by Patterson and Fourier techniques and refined by least squares to R = 0.088 for 8320 independent reflections. The central Pr ion is bonded to eight oxygen atoms from two molybdosilicic heteropoly ligands, forming a square antiprism; the average Pr–O distance is 2.44(2) Å. Both molybdosilicic heteropoly ligands have a defective α-Keggin structure.

Relevance: 100.00%

Abstract:

The title complexes (η⁵-C9H7)3Ln·OC4H8 (Ln = Nd, Gd, Er) were synthesized by the reaction of anhydrous lanthanide trichlorides with indenyl potassium and cyclooctadienyl potassium (1:2:1 molar ratio) in THF. The complexes were characterized by elemental analysis, infrared and ¹H NMR spectroscopy, and mass spectrometry. In addition, the crystal structures of (η⁵-C9H7)3Nd·OC4H8 (1) and (η⁵-C9H7)3Gd·OC4H8 (2) were determined by X-ray diffraction. Complexes 1 and 2 belong to the hexagonal space group P6₃, with unit cell parameters a = b = 11.843(3), c = 10.304(4) Å, V = 1251.7(9) Å³, Dc = 1.49 g·cm⁻³, Z = 2 for 1, and a = b = 11.805(2), c = 10.236(2) Å, V = 1235.4(6) Å³, Dc = 1.54 g·cm⁻³, Z = 2 for 2. The structures were solved by Patterson and Fourier techniques and refined by least squares to final discrepancy indices of R = 0.049, Rw = 0.053 using 925 independent reflections with I ≥ 3σ(I) for 1, and R = 0.023, Rw = 0.025 using 1327 independent reflections with I ≥ 3σ(I) for 2. The coordination number of both Nd³⁺ and Gd³⁺ is 10; the average Nd–O and Gd–O bond lengths are 2.557(21) and 2.459(13) Å, respectively. The structural studies showed the complexes to have 3-fold symmetry, but the THF molecule has no such symmetry; consequently, the arrangement of carbon atoms in the THF molecule is disordered.

Relevance: 100.00%

Abstract:

This thesis introduces elements of a theory of design activity and a computational framework for developing design systems. The theory stresses the opportunistic nature of designing and the complementary roles of focus and distraction; the interdependence of evaluation and generation; the multiplicity of ways of seeing over the history of a design session versus the exclusivity of a given way of seeing over an arbitrarily short period; and the incommensurability of criteria used to evaluate a design. The thesis argues for a principle-based rather than rule-based approach to designing documents. The Discursive Generator is presented as a computational framework for implementing specific design systems, and a simple system for arranging blocks according to a set of formal principles is developed by way of illustration. Both shape grammars and constraint-based systems are used to contrast current trends in design automation with the discursive approach advocated in the thesis. The Discursive Generator is shown to have some important properties lacking in other types of systems, such as dynamism, robustness, and the ability to deal with partial designs. When studied in terms of a search metaphor, the Discursive Generator is shown to exhibit behavior radically different from some traditional search techniques, and to avoid some of the well-known difficulties associated with them.

Relevance: 100.00%

Abstract:

This report describes a paradigm for combining associational and causal reasoning to achieve efficient and robust problem-solving behavior. The Generate, Test and Debug (GTD) paradigm generates initial hypotheses using associational (heuristic) rules. The tester verifies hypotheses; if a test fails, it supplies the debugger with causal explanations for the bugs found. The debugger uses domain-independent causal reasoning techniques to repair hypotheses, analyzing domain models and the causal explanations produced by the tester to determine how to replace the generator's faulty assumptions. We analyze the strengths and weaknesses of associational and causal reasoning techniques, and present a theory of debugging plans and interpretations. The GTD paradigm has been implemented and tested in the domains of geologic interpretation, the blocks world, and Tower of Hanoi problems.
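The control flow of such a generate-test-debug loop can be sketched in a few lines. The sketch below is a minimal illustration only: the generator, tester, and debugger are hypothetical placeholders for the report's domain-specific components, and the string-based "assumptions" are invented for the example.

```python
# Minimal sketch of a Generate, Test and Debug (GTD) loop.
# All three components are hypothetical stand-ins; in the report they would be
# an associational rule base, a domain tester, and a causal-reasoning debugger.

def generate(observations):
    """Propose an initial hypothesis from associational (heuristic) rules."""
    return {"assumptions": list(observations)}  # placeholder hypothesis

def test(hypothesis):
    """Verify the hypothesis; on failure, return a causal explanation."""
    faulty = [a for a in hypothesis["assumptions"] if a.startswith("bad:")]
    return (len(faulty) == 0), faulty  # (passed, explanation of the bugs)

def debug(hypothesis, explanation):
    """Repair the hypothesis by replacing the faulty assumptions."""
    hypothesis["assumptions"] = [
        a for a in hypothesis["assumptions"] if a not in explanation
    ] + ["repaired:" + a for a in explanation]
    return hypothesis

def gtd(observations, max_rounds=10):
    hypothesis = generate(observations)
    for _ in range(max_rounds):
        passed, explanation = test(hypothesis)
        if passed:
            return hypothesis
        hypothesis = debug(hypothesis, explanation)
    return None  # no consistent hypothesis found within the budget

print(gtd(["ok:layering", "bad:fault-order"]))
```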

Relevance: 100.00%

Abstract:

R. Zwiggelaar, C. R. Bull, M. J. Mooney and S. Czarnes, "The detection of 'soft' materials by selective energy X-ray transmission imaging and computer tomography", Journal of Agricultural Engineering Research 66(3), 203-212 (1997).

Relevance: 100.00%

Abstract:

Hutchison, K., Alexander, N., Quinn, B., and Doherty, A. M. (2007). Internationalization motives and facilitating factors: Qualitative evidence from smaller specialist retailers. Journal of International Marketing, 15(3), 96-122.

Relevance: 100.00%

Abstract:

Faculty of Physics: Astronomical Observatory Institute (Wydział Fizyki: Instytut Obserwatorium Astronomiczne)

Relevance: 100.00%

Abstract:

The application of inverse filtering techniques to high-quality singing voice analysis/synthesis is discussed. In the context of source-filter models, inverse filtering provides a noninvasive method to extract the voice source, and thus to study voice quality. Although this approach is widely used in speech synthesis, this is not the case for the singing voice. Several studies have shown that inverse filtering techniques fail in the case of the singing voice, for reasons that remain unclear. In order to shed light on this problem, we consider here an additional feature of the singing voice not present in speech: vibrato. Vibrato has traditionally been studied by sinusoidal modeling. As an alternative, we introduce a novel noninteractive source-filter model that incorporates the mechanisms of vibrato generation. This model also allows the comparison of the results produced by inverse filtering techniques and by sinusoidal modeling, as they apply to the singing voice rather than to speech. In this way, the limitations of these conventional techniques, described in previous literature, are explained. Both synthetic signals and singer recordings are used to validate and compare the techniques presented in the paper.
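To make the vibrato mechanism concrete, the fragment below synthesizes a frequency-modulated harmonic source of the kind a source-filter vibrato model starts from. The sample rate, vibrato rate, and depth values are illustrative assumptions; the paper's actual model is considerably richer.

```python
import numpy as np

# Sketch of a vibrato-modulated voice source: a harmonic signal whose
# fundamental is sinusoidally frequency-modulated. All parameter values
# are illustrative assumptions, not taken from the paper.
fs = 16000                  # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
f0 = 220.0                  # mean fundamental frequency (Hz)
rate, depth = 5.5, 0.03     # vibrato rate (Hz) and relative depth

# Instantaneous frequency with sinusoidal vibrato, integrated to a phase.
f_inst = f0 * (1.0 + depth * np.sin(2 * np.pi * rate * t))
phase = 2 * np.pi * np.cumsum(f_inst) / fs

# A few decaying harmonics stand in for the voice source; in a full
# source-filter synthesis a vocal-tract filter would then shape this source.
source = sum((1.0 / k) * np.sin(k * phase) for k in range(1, 6))
```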

Relevance: 100.00%

Abstract:

Monograph presented to Universidade Fernando Pessoa for the degree of Licentiate in Dental Medicine.

Relevance: 100.00%

Abstract:

Monograph presented to Universidade Fernando Pessoa as part of the requirements for the degree of Licentiate in Dental Medicine.

Relevance: 100.00%

Abstract:

The exploding demand for services like the World Wide Web reflects the potential presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing, the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels:

(1) Resource Management Services. Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves a family of real-time resource management models, organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints.

(2) Middleware Services. Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, and so on. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements.

(3) Communication Infrastructure. To achieve any guarantee of quality of service or performance, one must get at the network layer, which provides the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design.

(4) Object-Oriented Web Computing Framework. A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring the integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
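As one concrete illustration of the middleware ideas in level (2), document access patterns can drive speculative prefetching. The sketch below ranks prefetch candidates with a first-order transition table; this particular model is an assumption of the illustration, not the project's stated algorithm.

```python
from collections import Counter, defaultdict

# Sketch of pattern-driven speculative prefetching (cf. level 2 above).
# Using a first-order model of document-to-document transitions is an
# assumption of this illustration, not necessarily the project's approach.

transitions = defaultdict(Counter)   # transitions[doc][next_doc] -> count
last_doc = None

def record_access(doc):
    """Update the access-pattern model as requests are observed."""
    global last_doc
    if last_doc is not None:
        transitions[last_doc][doc] += 1
    last_doc = doc

def prefetch_candidates(doc, k=2):
    """Return the k documents most often requested right after `doc`."""
    return [d for d, _ in transitions[doc].most_common(k)]

for d in ["index", "paper", "index", "data", "index", "paper"]:
    record_access(d)
print(prefetch_candidates("index"))   # e.g. ['paper', 'data']
```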

Relevance: 100.00%

Abstract:

ImageRover is a search-by-image-content navigation tool for the World Wide Web. The staggering size of the WWW dictates certain strategies and algorithms for image collection, digestion, indexing, and user interface. This paper describes two key components of the ImageRover strategy: image digestion and relevance feedback. Image digestion occurs during image collection: robots digest the images they find, computing image decompositions and indices, and storing this extracted information in vector form for searches based on image content. Relevance feedback occurs during index search: users can iteratively guide the search through the selection of relevant examples. ImageRover employs a novel relevance feedback algorithm to determine the weighted combination of image similarity metrics appropriate for a particular query. ImageRover is available and running on the web site.
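The weighted-combination idea can be illustrated with a generic relevance-feedback update in which metrics that the user's relevant examples agree on receive larger weights. The variance-based rule below is an assumption of this sketch, not necessarily ImageRover's exact algorithm.

```python
import numpy as np

# Generic relevance-feedback sketch: combine several image similarity metrics
# with weights derived from user-selected relevant examples. Down-weighting
# metrics with high variance among the relevant examples is an assumption of
# this illustration, not necessarily ImageRover's exact update rule.

def update_weights(relevant_dists):
    """relevant_dists: (n_examples, n_metrics) per-metric distances to query."""
    var = np.var(relevant_dists, axis=0) + 1e-9
    w = 1.0 / var                     # consistent metrics get higher weight
    return w / w.sum()

def combined_distance(query_feats, doc_feats, weights):
    """Weighted sum of per-metric distances between query and document."""
    return float(np.dot(weights, np.abs(query_feats - doc_feats)))

weights = update_weights(np.array([[0.10, 0.50],
                                   [0.12, 0.90],
                                   [0.11, 0.20]]))
print(weights)   # metric 0 dominates: the relevant examples agree on it
print(combined_distance(np.array([0.0, 0.0]), np.array([0.1, 0.5]), weights))
```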

Relevance: 100.00%

Abstract:

For Part I, see ibid., vol. 44, pp. 927-36 (1997). In a digital communications system, data are transmitted from one location to another by mapping bit sequences to symbols, and symbols to sample functions of analog waveforms. The analog waveform passes through a bandlimited (possibly time-varying) analog channel, where the signal is distorted and noise is added. In a conventional system, the analog sample functions sent through the channel are weighted sums of one or more sinusoids; in a chaotic communications system, the sample functions are segments of chaotic waveforms. At the receiver, the symbol may be recovered by means of coherent detection, where all possible sample functions are known, or by noncoherent detection, where one or more characteristics of the sample functions are estimated. In a coherent receiver, synchronization is the most commonly used technique for recovering the sample functions from the received waveform; these sample functions are then used as reference signals for a correlator. Synchronization-based coherent receivers have advantages over noncoherent receivers in terms of noise performance, bandwidth efficiency (in narrow-band systems) and/or data rate (in chaotic systems). These advantages are lost if synchronization cannot be maintained, for example under poor propagation conditions; in these circumstances, communication without synchronization may be preferable. The theory of conventional telecommunications is extended to chaotic communications, chaotic modulation techniques and receiver configurations are surveyed, and chaotic synchronization schemes are described.
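A toy version of coherent detection with chaotic sample functions: successive segments of a logistic-map waveform carry antipodal symbols, and a receiver assumed to be perfectly synchronized correlates each received segment against the known sample function. The map, the antipodal scheme, and the parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

# Toy antipodal chaotic modulation with a coherent correlator receiver.
# The logistic map and perfect synchronization are assumptions of this sketch.

def logistic_segment(x0, n):
    """n samples of the logistic map x -> 4x(1-x), shifted to zero mean."""
    x, out = x0, []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        out.append(x - 0.5)
    return np.array(out)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=8)
n = 64                                           # samples per symbol
chaos = logistic_segment(0.3141, n * len(bits))  # successive sample functions

tx = np.concatenate([(1.0 if b else -1.0) * chaos[i*n:(i+1)*n]
                     for i, b in enumerate(bits)])
rx = tx + 0.2 * rng.standard_normal(tx.size)     # additive channel noise

# Coherent detection: a synchronized receiver knows each sample function
# and correlates the received segment against it.
decisions = [int(np.dot(rx[i*n:(i+1)*n], chaos[i*n:(i+1)*n]) > 0)
             for i in range(len(bits))]
print(bits.tolist(), decisions)
```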

Relevance: 100.00%

Abstract:

This thesis is concerned with several aspects of the chemistry of iron compounds. The preparation (with particular emphasis on coprecipitation and sol-gel techniques) and processing of ferrites are discussed. Chapter 2 describes the synthesis of Ni-Zn ferrites with various compositions by three methods, including coprecipitation and sol-gel techniques. The Ni-Zn ferrites were characterised by powder X-ray diffractometry (PXRD), scanning electron microscopy (SEM), vibrating sample magnetometry (VSM), Mössbauer spectroscopy and resistivity measurements, and the results for the corresponding ferrites prepared by each method are compared. Chapter 3 reports the sol-gel preparation of a lead borosilicate glass and its addition to the Ni-Zn ferrites prepared by the sol-gel method in Chapter 2. The glass-ferrites formed were analysed by the same techniques employed in Chapter 2. Alterations in the microstructure, magnetic and electronic properties of the ferrites due to glass addition are described. Chapter 4 introduces compounds containing Fe-O-B, Fe-O-Si or B-O-Si linkages. The synthesis and characterisation of compounds containing Fe-O-B units are described. The structure of [Fe(SALEN)]2O.CH2Cl2 (17), used in attempts to prepare compounds with Fe-O-Si bonds, was determined by X-ray crystallography. Chapter 4 also details the synthesis of three new borosilicate compounds containing ferrocenyl groups, i.e. [(FcBO)2(OSiBut2)2] (19), [(FcBO)2(OSiPh2)2] (20) and [FcBOSiPh3] (21). The structure of (19) was determined by X-ray crystallographic analysis. Chapter 5 reviews the intercalation properties of the layered host compound iron oxychloride (FeOCl). Intercalation compounds prepared with the microwave dielectric heating technique are also discussed. The syntheses of intercalation compounds by the microwave method with FeOCl as host and ferrocene, ferrocenylboronic acid and 4-aminopyridine as guest species are described. Characterisation of these compounds by powder X-ray diffractometry (PXRD) and Mössbauer spectroscopy is reported. The attempted synthesis of an intercalation compound with the borosilicate compound (19) as guest species is discussed. Appendices A-E describe the theory and instrumentation involved in powder X-ray diffractometry (PXRD), scanning electron microscopy (SEM), vibrating sample magnetometry (VSM), Mössbauer spectroscopy and electrical resistivity measurements, respectively. Appendix F details the attempted syntheses of compounds with Fe-O-B and Fe-O-Si linkages.

Relevance: 100.00%

Abstract:

An aim of proactive risk management strategies is the timely identification of safety-related risks. One way to achieve this is by deploying early warning systems. Early warning systems aim to provide useful information, in a timely manner, on the presence of potential threats to a system, on the system's level of vulnerability, or on both. This information can then be used to take proactive safety measures. The United Nations has recommended that any early warning system have four essential elements: risk knowledge, a monitoring and warning service, dissemination and communication, and a response capability. This research deals with the risk knowledge element of an early warning system. The risk knowledge element contains models of possible accident scenarios. These accident scenarios are created using hazard analysis techniques, which can be categorised as traditional or contemporary. Traditional hazard analysis techniques assume that accidents occur due to a sequence of events, whereas contemporary hazard analysis techniques assume that safety is an emergent property of complex systems. The problem is that no software editor is available that analysts can use to create models of accident scenarios based on contemporary hazard analysis techniques and, at the same time, generate computer code representing those models. This research aims to enhance the process of generating computer code from graphical models that associate early warning signs and causal factors with a hazard, based on contemporary hazard analysis techniques. For this purpose, the thesis investigates the use of Domain Specific Modeling (DSM) technologies. The contribution of this thesis is the design and development of a set of three graphical Domain Specific Modeling Languages (DSMLs) that, combined, provide all of the constructs needed for safety experts and practitioners to conduct hazard and early warning analysis based on a contemporary hazard analysis approach. The languages represent the elements and relations necessary to define accident scenarios and their associated early warning signs. The three DSMLs were incorporated into a prototype software editor that enables safety scientists and practitioners to create and edit hazard and early warning analysis models in a usable manner and, as a result, to generate executable code automatically. This research shows that DSM technologies can be used to develop a set of three DSMLs that allow users to conduct hazard and early warning analysis in a more usable manner. Furthermore, the three DSMLs and their dedicated editor, presented in this thesis, may significantly enhance the process of creating the risk knowledge element of computer-based early warning systems.
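As a miniature illustration of the model-to-code idea (not a reproduction of the thesis's three graphical DSMLs), the sketch below encodes a hazard, its causal factors, and its early warning signs in a small textual model and generates executable monitoring code from it; the model structure and the generated checks are invented for the example.

```python
from dataclasses import dataclass, field

# Miniature stand-in for a DSM workflow: a declarative model of a hazard with
# its causal factors and early warning signs, plus a trivial code generator.
# This textual structure is invented for illustration; the thesis defines
# three graphical DSMLs, not this form.

@dataclass
class HazardModel:
    hazard: str
    causal_factors: list = field(default_factory=list)
    warning_signs: list = field(default_factory=list)   # (name, threshold)

def generate_monitor(model: HazardModel) -> str:
    """Emit executable code that checks each early warning sign."""
    checks = "\n".join(
        f"    if signals.get({name!r}, 0) > {thr}: active.append({name!r})"
        for name, thr in model.warning_signs
    )
    return (f"def monitor_{model.hazard}(signals):\n"
            f"    active = []\n{checks}\n    return active\n")

model = HazardModel("overpressure",
                    causal_factors=["valve_failure"],
                    warning_signs=[("tank_pressure", 8.5), ("temp", 90)])
code = generate_monitor(model)
exec(code)                                        # defines monitor_overpressure
print(monitor_overpressure({"tank_pressure": 9.0}))   # ['tank_pressure']
```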