878 results for "sets of words"


Relevance: 90.00%

Abstract:

A new method is presented here to analyse the Peierls-Nabarro model of an edge dislocation in a rectangular plate. The analysis is based on a superposition scheme and series expansions of complex potentials. The stress field and the dislocation density field on the slip plane can be expressed as series of Chebyshev polynomials of the first and second kinds, respectively. Two sets of governing equations are obtained, one on the slip plane and one on the outer boundary of the rectangular plate, and three numerical methods are used to solve them.
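For orientation only (the abstract does not give the expansions explicitly), such expansions are conventionally written on the normalised slip-plane interval -1 <= x <= 1 using Chebyshev polynomials T_n and U_n of the first and second kinds; the square-root weight on the density series and the truncation order N are assumptions of this sketch:

\sigma(x) \approx \sum_{n=0}^{N} a_n \, T_n(x), \qquad
\rho(x) \approx \sqrt{1-x^{2}} \, \sum_{n=0}^{N} b_n \, U_n(x).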

Relevance: 90.00%

Abstract:

For sign languages used by deaf communities, linguistic corpora have until recently been unavailable, due to the lack of a writing system and a written culture in these communities, and the very recent advent of digital video. Recent improvements in video and computer technology have now made larger sign language datasets possible; however, large sign language datasets that are fully machine-readable are still elusive. This is due to two challenges: (1) inconsistencies that arise when signs are annotated by means of spoken/written language, and (2) the fact that many parts of signed interaction are not necessarily fully composed of lexical signs (the equivalent of words), but instead consist of less conventionalised constructions. As sign language corpus building progresses, the potential for some standards in annotation is beginning to emerge, but before this project there were no attempts to standardise these practices across corpora, which is required in order to compare data crosslinguistically. This project thus had the following aims: (1) to develop annotation standards for glosses (lexical/word level); (2) to test their reliability and validity; and (3) to improve current software tools that facilitate a reliable workflow. Overall, the project aimed not only to set a standard for the whole field of sign language studies throughout the world but also to make significant advances toward two of the world’s largest machine-readable datasets for sign languages – specifically the BSL Corpus (British Sign Language, http://bslcorpusproject.org) and the Corpus NGT (Sign Language of the Netherlands, http://www.ru.nl/corpusngt).

Relevance: 90.00%

Abstract:

The implementation of various types of marine protected areas is one of several management tools available for conserving representative examples of the biological diversity within marine ecosystems in general and National Marine Sanctuaries in particular. However, deciding where and how many sites to establish within a given area is frequently hampered by incomplete knowledge of the distribution of organisms and by a limited understanding of the potential tradeoffs that would allow planners to address frequently competing interests in an objective manner. Fortunately, this is beginning to change. Recent studies on the continental shelf of the northeastern United States suggest that substrate and water mass characteristics are highly correlated with the composition of benthic communities and may therefore serve as proxies for the distribution of biological diversity. A detailed geo-referenced interpretative map of major sediment types within Stellwagen Bank National Marine Sanctuary (SBNMS) has recently been developed, and computer-aided decision support tools have reached new levels of sophistication. We demonstrate the use of simulated annealing, a type of mathematical optimization, to identify suites of potential conservation sites within SBNMS that equally represent 1) all major sediment types and 2) derived habitat types based on both sediment and depth, in the smallest amount of space. The Sanctuary was divided into 3,610 sampling units of 0.5 min² each. Simulations incorporated constraints on the physical dispersion of sampling units to varying degrees, such that solutions included between one and four site clusters. Target representation goals were set at 5, 10, 15, 20, and 25 percent of each sediment type, and 10 and 20 percent of each habitat type. Simulations consisted of 100 runs, from which we identified the best solution (i.e., smallest total area) and four near-optimal alternates. We also plotted the total number of times each sampling unit occurred in the solution sets of the 100 runs as a means of gauging the variety of spatial configurations available under each scenario. Results suggested that the total combined area needed to represent each of the sediment types in equal proportions was equal to the percent representation level sought. Slightly larger areas were required to represent all habitat types at the same representation levels. Total boundary length increased in direct proportion to the number of sites at all levels of representation for simulations involving sediment and habitat classes, but increased more rapidly with the number of sites at higher representation levels. There were a large number of alternate spatial configurations at all representation levels, although generally fewer among one- and two-site than among three- and four-site solutions. These differences were less pronounced among simulations targeting habitat representation, suggesting that a similar degree of flexibility is inherent in the spatial arrangement of potential protected area systems containing one versus several sites for similar levels of habitat representation. We attribute these results to the distribution of sediment and depth zones within the Sanctuary, and to the fact that even levels of representation were sought in each scenario. (PDF contains 33 pages.)
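For readers unfamiliar with the technique, the following is a minimal Python sketch of simulated-annealing site selection. The objective (total selected area plus a penalty for unmet representation targets), the parameter values, the function names (anneal, target_frac, penalty), and the toy data are assumptions of this sketch, not the study's actual formulation or software.

import math
import random

def anneal(units, target_frac=0.10, steps=20000, t0=1.0, cooling=0.9995, penalty=100.0):
    """Toy site selection. units: list of (area, sediment_class) tuples."""
    classes = {c for _, c in units}
    class_area = {c: sum(a for a, cc in units if cc == c) for c in classes}

    def cost(selected):
        total_area = sum(units[i][0] for i in selected)
        shortfall = 0.0
        for c in classes:
            got = sum(units[i][0] for i in selected if units[i][1] == c)
            shortfall += max(0.0, target_frac * class_area[c] - got)
        return total_area + penalty * shortfall  # area used plus penalised unmet targets

    selected = set(random.sample(range(len(units)), len(units) // 10))
    best, best_cost, t = set(selected), cost(selected), t0
    for _ in range(steps):
        i = random.randrange(len(units))
        candidate = set(selected)
        candidate.symmetric_difference_update({i})  # flip one sampling unit in or out
        delta = cost(candidate) - cost(selected)
        if delta < 0 or random.random() < math.exp(-delta / max(t, 1e-9)):
            selected = candidate
            c = cost(selected)
            if c < best_cost:
                best, best_cost = set(selected), c
        t *= cooling  # geometric cooling schedule
    return best, best_cost

if __name__ == "__main__":
    random.seed(0)
    toy_units = [(0.5, random.choice("ABCD")) for _ in range(200)]
    sites, c = anneal(toy_units)
    print(len(sites), "units selected, cost", round(c, 2))

In practice, a clustering or boundary-length term reflecting the dispersion constraints described above would be added to the cost function, and the best and near-optimal solutions would be collected over repeated runs.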

Relevance: 90.00%

Abstract:

ENGLISH: These aspects of the schooling habits of the yellowfin and skipjack tuna may be investigated by means of the logbook records of the catches of individual sets of the nets of purse-seine vessels. For both purposes it must be assumed that a set is made, in each case, on a single school of fish. The study of school sizes based on these data requires the additional assumption either that the entire school is captured or that each set captures a constant fraction of the school upon which it is made. In this paper we report on the results of such investigations based on logbook records of the purse-seine fleet. SPANISH: Estos aspectos de los hábitos gregarios de los atunes aleta amarilla y barrilete pueden ser investigados a base de los registros de bitácora en que se anotan las pescas resultantes de cada una de las operaciones con la red de encierre que realizan los barcos rederos. Para ambos propósitos hay que suponer que las operaciones se efectúan, en cada caso, en un cardumen independiente. El estudio de los tamaños de los cardúmenes o manchas, a base de estos datos, requiere una suposición adicional: que el cardumen entero es capturado o, en su defecto, que en cada operación con la red se pesca una fracción constante de la mancha objeto de la pesca. En el presente artículo damos cuenta de los resultados de dichas investigaciones basadas en los registros de bitácora que lleva la flota de embarcaciones que utilizan redes de encierre. (PDF contains 47 pages.)

Relevance: 90.00%

Abstract:

Accurate and fast decoding of speech imagery from electroencephalographic (EEG) data could serve as the basis for a new generation of brain-computer interfaces (BCIs) that are more portable and easier to use. However, decoding speech imagery from EEG is a hard problem due to many factors. In this paper we focus on the analysis of the classification step of speech imagery decoding for a three-class vowel speech imagery recognition problem. We show empirically that different classification subtasks may require different classifiers for accurate decoding, and we obtain a classification accuracy that improves on the best previously published results. We further investigate the relationship between the classifiers and different sets of features selected by the common spatial patterns method. Our results indicate that further improvement in BCIs based on speech imagery could be achieved by carefully selecting an appropriate combination of classifiers for the subtasks involved.
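As an illustration of the kind of pipeline implied here, the sketch below pairs a basic two-class common spatial patterns (CSP) extractor with a single classifier for one binary subtask. The synthetic data, the helper names (csp_filters, csp_features), the linear SVM choice, and the reduction of the three-vowel problem to pairwise subtasks are assumptions of this sketch, not the authors' actual method.

import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC

def csp_filters(epochs_a, epochs_b, n_components=4):
    """epochs_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def avg_cov(epochs):
        return np.mean([np.cov(e) for e in epochs], axis=0)
    ca, cb = avg_cov(epochs_a), avg_cov(epochs_b)
    # Generalized eigenproblem: Ca w = lambda (Ca + Cb) w
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_components // 2], order[-n_components // 2:]])
    return vecs[:, picks].T  # spatial filters, shape (n_components, n_channels)

def csp_features(epochs, filters):
    projected = np.einsum("fc,tcs->tfs", filters, epochs)  # apply spatial filters
    var = projected.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))    # log-variance features

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.standard_normal((40, 8, 256))          # synthetic "vowel 1" epochs
    b = 1.5 * rng.standard_normal((40, 8, 256))    # synthetic "vowel 2" epochs
    w = csp_filters(a[:30], b[:30])
    X_train = np.vstack([csp_features(a[:30], w), csp_features(b[:30], w)])
    y_train = np.array([0] * 30 + [1] * 30)
    X_test = np.vstack([csp_features(a[30:], w), csp_features(b[30:], w)])
    clf = SVC(kernel="linear").fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, np.array([0] * 10 + [1] * 10)))

In the spirit of the abstract, the classifier on the last lines is exactly the component one would swap per subtask (e.g. linear SVM for one vowel pair, a different classifier for another) while keeping the CSP feature extraction fixed.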

Relevance: 90.00%

Abstract:

Numerous transcription factors self-assemble into oligomeric species of different orders in a way that is actively regulated by the cell. Until now, no general functional role has been identified for this widespread process. Here, we capture the effects of modulated self-assembly on gene expression with a novel quantitative framework. We show that this mechanism provides precision and flexibility, two seemingly antagonistic properties, to the sensing of diverse cellular signals by systems that share common elements present in transcription factors such as p53, NF-kappa B, STATs, Oct and RXR. Applied to the nuclear hormone receptor RXR, this framework accurately reproduces a broad range of classical, previously unexplained sets of gene expression data and corroborates the existence of a precise functional regime with flexible properties that can be controlled both at a genome-wide scale and at the level of individual promoters.
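As a minimal illustration of why modulated self-assembly reshapes the response (a simple mass-action form assumed for this note, not the authors' actual framework): if the active DNA-binding species is an n-mer formed with assembly constant K_n, so that [TF_n] = K_n [TF]^n, and it binds the promoter with dissociation constant K_d, then the occupancy is

\theta([\mathrm{TF}]) = \frac{K_n [\mathrm{TF}]^{n} / K_d}{1 + K_n [\mathrm{TF}]^{n} / K_d},

so regulating the degree of self-assembly K_n shifts the activation threshold, while the oligomeric order n sets the steepness of the response to the monomer concentration.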

Relevance: 90.00%

Abstract:

Experimental fishing operations with driftnets were conducted in Lagos coastal waters with a view to finding an appropriate gear for effective exploitation of sharks and other pelagic fish species that are not normally caught in trawls. The design and fabrication of the driftnets, as well as the fishing trials, were undertaken between May 1977 and April 1978. Six baited driftnet sets of equal panels with three stretched mesh sizes of 190.5 mm, 228.6 mm and 241.3 mm were used. Analyses were carried out on the species composition of catches, the weight and number of each species/group of fish, the catch efficiency of the driftnets, and the operating costs and financial returns.

Relevance: 90.00%

Abstract:

Background: Over many years, it has been assumed that enzymes work either in an isolated way or organized into small catalytic groups. Several studies performed using "metabolic network models" are helping to understand the degree of functional complexity that characterizes enzymatic dynamic systems. In a previous work, we used "dissipative metabolic networks" (DMNs) to show that enzymes can present a self-organized global functional structure, in which several sets of enzymes are always in an active state, whereas the rest of the molecular catalytic sets exhibit on-off switching dynamics. We suggested that this kind of global metabolic dynamics might be a genuine and universal functional configuration of the cellular metabolic structure, common to all living cells. Later, a different group showed experimentally that this kind of functional structure does, indeed, exist in several microorganisms. Methodology/Principal Findings: Here we have analyzed around 2,500,000 different DMNs in order to investigate the underlying mechanism of this dynamic global configuration. The numerical analyses that we have performed show that this global configuration is an emergent property inherent to the cellular metabolic dynamics. Concretely, we have found that the existence of a high number of enzymatic subsystems in the DMNs is the fundamental element for the spontaneous emergence of a functional reactive structure characterized by a metabolic core formed by several sets of enzymes that are always in an active state. Conclusions/Significance: This self-organized dynamic structure seems to be an intrinsic characteristic of metabolism, common to all living cellular organisms. To better understand cellular functionality, it will be crucial to structurally characterize these enzymatic self-organized global structures.

Relevance: 90.00%

Abstract:

Various families of exact solutions to the Einstein and Einstein-Maxwell field equations of General Relativity are treated for situations of sufficient symmetry that only two independent variables arise. The mathematical problem then reduces to consideration of sets of two coupled nonlinear differential equations.

The physical situations in which such equations arise include: a) the external gravitational field of an axisymmetric, uncharged steadily rotating body, b) cylindrical gravitational waves with two degrees of freedom, c) colliding plane gravitational waves, d) the external gravitational and electromagnetic fields of a static, charged axisymmetric body, and e) colliding plane electromagnetic and gravitational waves. Through the introduction of suitable potentials and coordinate transformations, a formalism is presented which treats all these problems simultaneously. These transformations and potentials may be used to generate new solutions to the Einstein-Maxwell equations from solutions to the vacuum Einstein equations, and vice-versa.
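For orientation, the stationary axisymmetric vacuum problem of case (a) is commonly reduced to the Ernst equation for a complex potential \mathcal{E}; the form quoted here is the standard one from the literature, and the thesis's own potentials and normalisations may differ:

(\operatorname{Re}\mathcal{E}) \, \nabla^{2}\mathcal{E} = \nabla\mathcal{E} \cdot \nabla\mathcal{E},

whose real and imaginary parts supply precisely a set of two coupled nonlinear partial differential equations in two independent variables.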

The calculus of differential forms is used as a tool for generation of similarity solutions and generalized similarity solutions. It is further used to find the invariance group of the equations; this in turn leads to various finite transformations that give new, physically distinct solutions from old. Some of the above results are then generalized to the case of three independent variables.

Relevance: 90.00%

Abstract:

Methodology for the preparation of allenes from propargylic hydrazine precursors under mild conditions is described. Oxidation of the propargylic hydrazines, which can be readily prepared from propargylic alcohols, with either of two azo oxidants, diethyl azodicarboxylate (DEAD) or 4-methyl-1,2,4-triazoline-3,5-dione (MTAD), effects conversion to the allenes, presumably via sigmatropic rearrangement of a monoalkyl diazene intermediate. This rearrangement is demonstrated to proceed with essentially complete stereospecificity. The application of this methodology to the preparation of other allenes, including two that are notable for their reactivity and thermal instability, is also described.

The structural and mechanistic study of a monoalkyl diazene intermediate in the oxidative transformation of propargylic hydrazines to allenes is described. The use of long-range heteronuclear NMR coupling constants for assigning monoalkyl diazene stereochemistry (E vs Z) is also discussed. Evidence is presented that all known monoalkyl diazenes are the E isomers, and the erroneous assignment of stereochemistry in the previous report of the preparation of (Z)-phenyldiazene is discussed.

The synthesis, characterization, and reactivity of 1,6-didehydro[10]annulene are described. This molecule has been recognized as an interesting synthetic target for over 40 years and represents the intersection of two sets of extensively studied molecules: nonbenzenoid aromatic compounds and molecules containing sterically compressed π-systems. The formation of 1,5-dehydronaphthalene from 1,6-didehydro[10]annulene is believed to be the prototype for cycloaromatizations that produce 1,4-dehydroaromatic species with the radical centers disposed anti about the newly formed single bond. The aromaticity of this annulene and the facility of its cycloaromatization are also analyzed.

Relevance: 90.00%

Abstract:

In the past, many different methodologies have been devised to support software development, and different sets of methodologies have been developed to support the analysis of software artefacts. We have identified this mismatch as one of the causes of the poor reliability of embedded systems software. The issue with software development styles is that they are "analysis-agnostic": they do not try to structure the code in a way that lends itself to analysis. The analysis is usually applied post-mortem, after the software has been developed, and it requires a large amount of effort. The issue with software analysis methodologies is that they do not exploit available information about the system being analyzed.

In this thesis we address the above issues by developing a new methodology, called "analysis-aware" design, that links software development styles with the capabilities of analysis tools. This methodology forms the basis of a framework for interactive software development. The framework consists of an executable specification language and a set of analysis tools based on static analysis, testing, and model checking. The language enforces an analysis-friendly code structure and offers primitives that allow users to implement their own testers and model checkers directly in the language. We introduce a new approach to static analysis that takes advantage of the capabilities of a rule-based engine. We have applied the analysis-aware methodology to the development of a smart home application.

Relevance: 90.00%

Abstract:

The initial objective of Part I was to determine the nature of upper mantle discontinuities, the average velocities through the mantle, and differences between mantle structure under continents and oceans by the use of P'dP', the seismic core phase P'P' (PKPPKP) that reflects at depth d in the mantle. In order to accomplish this, it was found necessary to also investigate core phases themselves and their inferences on core structure. P'dP' at both single stations and at the LASA array in Montana indicates that the following zones are candidates for discontinuities with varying degrees of confidence: 800-950 km, weak; 630-670 km, strongest; 500-600 km, strong but interpretation in doubt; 350-415 km, fair; 280-300 km, strong, varying in depth; 100-200 km, strong, varying in depth, may be the bottom of the low-velocity zone. It is estimated that a single station cannot easily discriminate between asymmetric P'P' and P'dP' for lead times of about 30 sec from the main P'P' phase, but the LASA array reduces this uncertainty range to less than 10 sec. The problems of scatter of P'P' main-phase times, mainly due to asymmetric P'P', incorrect identification of the branch, and lack of the proper velocity structure at the velocity point, are avoided and the analysis shows that one-way travel of P waves through oceanic mantle is delayed by 0.65 to 0.95 sec relative to United States mid-continental mantle.

A new P-wave velocity core model is constructed from observed times, dt/dΔ's, and relative amplitudes of P'; the observed times of SKS, SKKS, and PKiKP; and a new mantle-velocity determination by Jordan and Anderson. The new core model is smooth except for a discontinuity at the inner-core boundary determined to be at a radius of 1215 km. Short-period amplitude data do not require the inner core Q to be significantly lower than that of the outer core. Several lines of evidence show that most, if not all, of the arrivals preceding the DF branch of P' at distances shorter than 143° are due to scattering as proposed by Haddon and not due to spherically symmetric discontinuities just above the inner core as previously believed. Calculation of the travel-time distribution of scattered phases and comparison with published data show that the strongest scattering takes place at or near the core-mantle boundary close to the seismic station.

In Part II, the largest events in the San Fernando earthquake series, initiated by the main shock at 14 00 41.8 GMT on February 9, 1971, were chosen for analysis from the first three months of activity, 87 events in all. The initial rupture location coincides with the lower, northernmost edge of the main north-dipping thrust fault and the aftershock distribution. The best focal mechanism fit to the main shock P-wave first motions constrains the fault plane parameters to: strike, N 67° (± 6°) W; dip, 52° (± 3°) NE; rake, 72° (67°-95°) left lateral. Focal mechanisms of the aftershocks clearly outline a downstep of the western edge of the main thrust fault surface along a northeast-trending flexure. Faulting on this downstep is left-lateral strike-slip and dominates the strain release of the aftershock series, which indicates that the downstep limited the main event rupture on the west. The main thrust fault surface dips at about 35° to the northeast at shallow depths and probably steepens to 50° below a depth of 8 km. This steep dip at depth is a characteristic of other thrust faults in the Transverse Ranges and indicates the presence at depth of laterally-varying vertical forces that are probably due to buckling or overriding that causes some upward redirection of a dominant north-south horizontal compression. Two sets of events exhibit normal dip-slip motion with shallow hypocenters and correlate with areas of ground subsidence deduced from gravity data. Several lines of evidence indicate that a horizontal compressional stress in a north or north-northwest direction was added to the stresses in the aftershock area 12 days after the main shock. After this change, events were contained in bursts along the downstep and sequencing within the bursts provides evidence for an earthquake-triggering phenomenon that propagates with speeds of 5 to 15 km/day. Seismicity before the San Fernando series and the mapped structure of the area suggest that the downstep of the main fault surface is not a localized discontinuity but is part of a zone of weakness extending from Point Dume, near Malibu, to Palmdale on the San Andreas fault. This zone is interpreted as a decoupling boundary between crustal blocks that permits them to deform separately in the prevalent crustal-shortening mode of the Transverse Ranges region.

Relevance: 90.00%

Abstract:

This thesis has two major parts. The first part of the thesis describes a high-energy cosmic ray detector -- the High Energy Isotope Spectrometer Telescope (HEIST). HEIST is a large-area (0.25 m² sr) balloon-borne isotope spectrometer designed to make high-resolution measurements of isotopes in the element range from neon to nickel (10 ≤ Z ≤ 28) at energies of about 2 GeV/nucleon. The instrument consists of a stack of 12 NaI(Tl) scintillators, two Cerenkov counters, and two plastic scintillators. Each of the 2-cm thick NaI disks is viewed by six 1.5-inch photomultipliers whose combined outputs measure the energy deposition in that layer. In addition, the six outputs from each disk are compared to determine the position at which incident nuclei traverse each layer to an accuracy of ~2 mm. The Cerenkov counters, which measure particle velocity, are each viewed by twelve 5-inch photomultipliers using light integration boxes.

HEIST-2 determines the mass of individual nuclei by measuring both the change in the Lorentz factor (Δγ) that results from traversing the NaI stack, and the energy loss (ΔΕ) in the stack. Since the total energy of an isotope is given by Ε = γM, the mass M can be determined by M = ΔΕ/Δγ. The instrument is designed to achieve a typical mass resolution of 0.2 amu.
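As a hedged numerical illustration (the numbers are ours, not the thesis's): a nucleus that loses ΔE = 26 GeV of total energy while its Lorentz factor drops by Δγ = 0.50 has

M = \Delta E / \Delta\gamma = 52\ \mathrm{GeV} \approx 52 / 0.9315\ \mathrm{amu} \approx 56\ \mathrm{amu},

i.e. a mass consistent with 56Fe; achieving the 0.2 amu design resolution at this mass therefore requires measuring ΔE and Δγ to a combined fractional accuracy of a few tenths of a percent.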

The second part of this thesis presents an experimental measurement of the isotopic composition of the fragments from the breakup of high energy 40Ar and 56Fe nuclei. Cosmic ray composition studies rely heavily on semi-empirical estimates of the cross-sections for the nuclear fragmentation reactions which alter the composition during propagation through the interstellar medium. Experimentally measured yields of isotopes from the fragmentation of 40Ar and 56Fe are compared with calculated yields based on semi-empirical cross-section formulae. There are two sets of measurements. The first set of measurements, made at the Lawrence Berkeley Laboratory Bevalac using a beam of 287 MeV/nucleon 40Ar incident on a CH2 target, achieves excellent mass resolution (σm ≤ 0.2 amu) for isotopes of Mg through K using a Si(Li) detector telescope. The second set of measurements, also made at the Lawrence Berkeley Laboratory Bevalac using a beam of 583 MeV/nucleon 56Fe incident on a CH2 target, resolved Cr, Mn, and Fe fragments with a typical mass resolution of ~0.25 amu through the use of the Heavy Isotope Spectrometer Telescope (HIST), which was later carried into space on ISEE-3 in 1978. The general agreement between calculation and experiment is good, but some significant differences are reported here.

Relevance: 90.00%

Abstract:

The ambiguity function was employed as a merit function to design an optical system with a high depth of focus. The ambiguity function with the desired enlarged-depth-of-focus characteristics was obtained by using a properly designed joint filter to modify the ambiguity function of the original pupil in the phase-space domain. From the viewpoint of filter theory, we propose the constraints that spatial filters used to enlarge the focal depth must satisfy. These constraints coincide with those that appeared in the previous literature on this topic. Following our design procedure, several sets of apodizers were synthesized, and their performances in defocused imagery were compared with each other and with other previous designs. (c) 2005 Optical Society of America.
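For reference, the ambiguity function of a one-dimensional pupil p(x) is conventionally defined as (sign and normalisation conventions vary between authors):

A(x, u) = \int p\!\left(x' + \tfrac{x}{2}\right) \, p^{*}\!\left(x' - \tfrac{x}{2}\right) \, e^{\,i 2\pi u x'} \, dx',

and the defocused optical transfer function corresponds to the values of A along a straight line through the origin whose slope is proportional to the defocus, which is why reshaping A with an apodizer or joint filter controls the depth of focus.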