970 results for direct operational calculus
Abstract:
So far, most reports on on-line measurement have considered soil properties with direct spectral responses in near-infrared spectroscopy (NIRS). This work reports results of on-line measurement of soil properties with indirect spectral responses, e.g. pH, cation exchange capacity (CEC), exchangeable calcium (Caex) and exchangeable magnesium (Mgex), in one field in Bedfordshire in the UK. The on-line sensor consisted of a subsoiler coupled with an AgroSpec mobile, fibre-type, visible and near-infrared (vis–NIR) spectrophotometer (tec5 Technology for Spectroscopy, Germany) with a measurement range of 305–2200 nm, used to acquire soil spectra in diffuse reflectance mode. General calibration models for the studied soil properties were developed with partial least squares regression (PLSR) and leave-one-out cross-validation, using spectra measured under non-mobile laboratory conditions of 160 soil samples collected from different fields on four farms in Europe, namely in the Czech Republic, Denmark, the Netherlands and the UK. A group of 25 samples independent of the calibration set was used as a validation set. Higher accuracy was obtained for laboratory scanning than for on-line scanning of the 25 independent samples. The prediction accuracy for the laboratory and on-line measurements was classified as excellent/very good for pH (RPD = 2.69 and 2.14 and r2 = 0.86 and 0.78, respectively), and moderately good for CEC (RPD = 1.77 and 1.61 and r2 = 0.68 and 0.62, respectively) and Mgex (RPD = 1.72 and 1.49 and r2 = 0.66 and 0.67, respectively). For Caex, very good accuracy was obtained for the laboratory method (RPD = 2.19 and r2 = 0.86), compared with the poor accuracy of the on-line method (RPD = 1.30 and r2 = 0.61).
The ability to collect a large number of data points per field area (about 12,800 points per 21 ha) and to analyse simultaneously several soil properties without direct spectral response in the NIR range, at relatively high operational speed and with appreciable accuracy, encourages the recommendation of the on-line measurement system for site-specific fertilisation.
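As a rough orientation on the chemometrics described above, the following sketch reproduces the leave-one-out cross-validation loop and the RPD/r2 accuracy grading on invented data. For brevity, ridge regression stands in for PLSR; the data, dimensions and regularisation strength are our own assumptions, not the paper's.

```python
# Hedged sketch: leave-one-out cross-validation and RPD/r2 accuracy grading,
# as used to evaluate soil-property calibration models. Ridge regression is a
# stand-in for PLSR; all data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 50
X = rng.normal(size=(n, p))                           # mock vis-NIR spectra
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=n)   # mock soil pH values

def ridge_fit(X, y, lam=1.0):
    """Ridge regression weights: (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Leave-one-out cross-validation: refit with each sample held out in turn.
y_cv = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    w = ridge_fit(X[mask], y[mask])
    y_cv[i] = X[i] @ w

rmsep = np.sqrt(np.mean((y - y_cv) ** 2))
rpd = y.std(ddof=1) / rmsep        # RPD > 2: very good; 1.4-2: moderate
r2 = np.corrcoef(y, y_cv)[0, 1] ** 2
print(f"RPD = {rpd:.2f}, r2 = {r2:.2f}")
```

The RPD thresholds (above 2 very good, 1.4-2 moderate, below 1.4 poor) are the conventional grading implied by the abstract's classification of pH, CEC and Mgex.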
Abstract:
Direct Steam Generation (DSG) in Linear Fresnel (LF) solar collectors is being consolidated as a feasible technology for Concentrating Solar Power (CSP) plants. The competitiveness of this technology relies on the following main features: water as heat transfer fluid (HTF) in the Solar Field (SF), high superheated steam temperatures and pressures at the turbine inlet (500 °C and 90 bar), no heat tracing required to avoid HTF freezing, no HTF degradation, no environmental impacts, no heat exchanger between the SF and the Balance Of Plant (BOP), and low installation and maintenance costs. LF solar collectors were recently developed as an alternative to Parabolic Trough Collector (PTC) technology. The main advantages of LF are: reduced collector manufacturing and maintenance costs, linear rather than parabolic mirror shapes, fixed receiver pipes (no ball joints, reducing leaking at high pressures), lower susceptibility to wind damage, and light supporting structures allowing smaller driving devices. Companies such as Novatec, Areva and Solar Euromed are investing in LF DSG technology and constructing pilot plants to demonstrate the benefits and feasibility of this solution for given locations and conditions (Puerto Errado 1 and 2 in Murcia, Spain; Liddell in Newcastle, Australia; Kogan Creek in South West Queensland, Australia; Kimberlina in Bakersfield, California, USA; Llo Solar in the Pyrénées, France; Dhursar in India; etc.). Several critical decisions must be taken to reach a compromise and optimisation between plant performance, cost and durability. Some of these decisions concern the SF design: proper thermodynamic operational parameters, receiver material selection for high pressures, the number and location of phase separators and recirculation pumps, and pipe distribution to reduce the number of tubes (reducing possible leak points, transient time, etc.).
Given these aspects, the correct selection of design parameters and their correct assessment are the main targets when designing DSG LF power plants. For this purpose, in recent years several commercial software tools have been developed to simulate solar thermal power plants; those most focused on LF DSG design are Thermoflex and System Advisor Model (SAM). Once the simulation tool is selected, the proposed SF configuration, which constitutes the main innovation of this work, is studied and compared with one of the most typical state-of-the-art configurations. The transient analysis must be simulated in high detail, mainly in the BOP; start-up, shut-down, stand-by and partial loads are crucial to obtaining the annual plant performance. An innovative SF configuration was proposed and analysed to improve plant performance. Finally, it was demonstrated that thermal inertia and the BOP regulation mode are critical points in plant behaviour on days of low solar irradiation, impacting annual performance depending on the power plant location.
Deriving the full-reducing Krivine machine from the small-step operational semantics of normal order
Resumo:
We derive by program transformation Pierre Crégut's full-reducing Krivine machine KN from the structural operational semantics of the normal-order reduction strategy in a closure-converted pure lambda calculus. We thus establish the correspondence between the strategy and the machine, and showcase our technique for deriving full-reducing abstract machines. The machine we obtain is actually a slightly optimised version that can work with open terms and may be used in implementations of proof assistants.
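For orientation, the following sketch implements the classic weak-head (call-by-name) Krivine machine on de Bruijn-indexed terms, the machine that Crégut's KN refines. This is not the KN machine from the paper: KN additionally uses de Bruijn levels and further machinery to keep reducing under binders; the representation here is our own minimal choice.

```python
# Minimal weak-head Krivine machine on de Bruijn-indexed lambda terms.
# (A hedged illustration of the starting point; Cregut's KN extends this
# to full reduction under binders.)
from dataclasses import dataclass

@dataclass
class Var:          # de Bruijn index
    idx: int

@dataclass
class Lam:          # \ . body
    body: object

@dataclass
class App:          # function application
    fun: object
    arg: object

def krivine(term):
    """Reduce a closed term to weak-head normal form.

    State is (term, env, stack): env maps de Bruijn indices to closures,
    the stack holds pending argument closures.
    """
    env, stack = [], []
    while True:
        if isinstance(term, App):              # push the argument as a closure
            stack.append((term.arg, env))
            term = term.fun
        elif isinstance(term, Lam) and stack:  # beta step: bind top of stack
            env = [stack.pop()] + env
            term = term.body
        elif isinstance(term, Var):            # enter the closure bound to idx
            term, env = env[term.idx]
        else:                                  # abstraction, empty stack: done
            return term, env

# (\x. x) (\y. y)  reduces to  \y. y
whnf, _ = krivine(App(Lam(Var(0)), Lam(Var(0))))
```

The machine stops at the first abstraction with an empty argument stack; a full-reducing machine like KN would instead continue into the abstraction body, which is exactly the "phase distinction" the derivation has to handle.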
Abstract:
This thesis studies full reduction in lambda calculi. In a nutshell, full reduction consists in evaluating the bodies of functions in a functional programming language with binders. The classical (i.e., pure untyped) lambda calculus is taken as the formal system that models the functional paradigm. Full reduction is a prominent technique when programs are treated as data objects, for instance when performing optimisations by partial evaluation, or when some attribute of the program is represented by a program itself, like the type in modern proof assistants. A notable feature of many full-reducing operational semantics is their hybrid nature, which is introduced formally and which constitutes the guiding theme of the thesis. In the lambda calculus, the hybrid nature amounts to a 'phase distinction' in the treatment of abstractions, considered either from outside or from inside themselves. This distinction entails a layered structure in which a hybrid semantics depends on one or more subsidiary semantics. From a programming-languages standpoint, the thesis shows how to derive implementations of full-reducing operational semantics from their specifications by using program-transformation techniques. These techniques are syntactical transformations which preserve the semantic equivalence of programs. The existing program-transformation techniques are adjusted to work with implementations of hybrid semantics. The thesis also shows how full reduction impacts implementations that use the environment technique. The environment technique is a key ingredient of real-world implementations of abstract machines which helps to circumvent the issue with binders.
From a formal-systems standpoint, the thesis discloses a novel consistent theory for the call-by-value variant of the lambda calculus which accounts for full reduction. This theory entails a notion of observational equivalence which distinguishes more points than other existing theories for the call-by-value lambda calculus. The contribution helps to establish a 'standard theory' in that calculus analogous to the 'standard theory' advocated by Barendregt for the classical lambda calculus. Some proof-theoretical results are presented, and insights on the model-theoretical study are given.
Abstract:
One of the main concerns when conducting a dam test is the accurate determination of the hydrograph for a specific flood event. The use of 2D direct-rainfall hydraulic mathematical models on a finite-element mesh, combined with the efficiency in vector calculus provided by CUDA (Compute Unified Device Architecture) technology, nowadays enables the simulation of complex hydrological models without the need for terrain sub-basin and transit splitting (as in HEC-HMS). Both the Spanish PNOA (National Plan of Aerial Orthophotography) Digital Terrain Model GRID, with 5 x 5 m accuracy, and the CORINE GIS Land Cover (Coordination of INformation of the Environment), which allows assessment of the ground roughness, provide enough data to easily build these kinds of models.
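The core "direct rainfall" idea can be caricatured with a few lines of array code: rain is applied to every cell of the terrain grid, and water is pushed toward lower neighbours. This is a hedged toy with an invented grid and routing only along one axis, not a shallow-water solver and not the PNOA/CORINE workflow.

```python
# Toy direct-rainfall sketch (not a real hydraulic model): uniform rain on
# every cell of a DEM, simple mass-conserving downslope routing on the x-axis.
import numpy as np

def rain_on_grid(dem, rain, steps):
    """dem: 2D elevation array; returns water depth per cell after routing."""
    depth = np.zeros_like(dem, dtype=float)
    for _ in range(steps):
        depth += rain                              # direct rainfall on every cell
        surface = dem + depth                      # free-surface elevation
        dx = surface[:, :-1] - surface[:, 1:]      # > 0: left neighbour is higher
        # Move water toward the lower neighbour, never more than half the
        # source cell's depth, so depths stay non-negative.
        flow = np.where(dx > 0,
                        np.minimum(0.25 * dx, 0.5 * depth[:, :-1]),
                        np.maximum(0.25 * dx, -0.5 * depth[:, 1:]))
        depth[:, :-1] -= flow
        depth[:, 1:] += flow
    return depth

dem = np.tile(np.linspace(9.0, 0.0, 10), (5, 1))   # plane sloping down to the right
depth = rain_on_grid(dem, rain=1.0, steps=20)
```

Because the flow step only redistributes water between neighbouring cells, total volume equals rainfall times cell count, and the water pools at the low edge of the slope; the production models in the abstract obtain the hydrograph by integrating such outflow over time on a proper mesh.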
Abstract:
In distinction to single-stranded anticodons built of G, C, A, and U bases, their presumable double-stranded precursors at the first three positions of the acceptor stem are composed almost invariably of G-C and C-G base pairs. Thus, the “second” operational RNA code responsible for correct aminoacylation seems to be a (G,C) code preceding the classic genetic code. Although historically rooted, the two codes were destined to diverge quite early. However, closer inspection revealed that two complementary catalytic domains of class I and class II aminoacyl-tRNA synthetases (aaRSs) multiplied by two, also complementary, G2-C71 and C2-G71 targets in tRNA acceptors, yield four (2 × 2) different modes of recognition. It appears therefore that the core four-column organization of the genetic code, associated with the most conservative central base of anticodons and codons, was in essence predetermined by these four recognition modes of the (G,C) operational code. The general conclusion follows that the genetic code per se looks like a “frozen accident” but only beyond the “2 × 2 = 4” scope. The four primordial modes of tRNA–aaRS recognition are amenable to direct experimental verification.
Abstract:
Operational capabilities are characterised as an internal resource of the firm and a source of competitive advantage. However, the operations strategy literature provides an inadequate constitutive definition for operational capabilities: it disregards the relativisation of different contexts and the limitations of the empirical base, and it does not adequately explore the extensive literature on operational practices. When operational practices are operationalised in the firm's internal environment, they can be incorporated into organisational routines and, through the tacit knowledge of production, be transformed into operational capabilities, thus creating barriers to imitation. Nevertheless, few researchers explore operational practices as antecedents of operational capabilities. Based on a literature review, we investigated the nature of operational capabilities; the relationship between operational practices and operational capabilities; the types of operational capabilities characterised in the firm's internal environment; and the impact of operational capabilities on operational performance. We conducted mixed-methods research. In the qualitative stage, we carried out multiple case studies with four firms: two American multinationals operating in Brazil and two Brazilian firms. Data were collected through semi-structured interviews with semi-open questions, based on the literature review on operational practices and operational capabilities. The interviews were conducted in person. In total, 73 interviews were carried out (21 in the first case, 18 in the second, 18 in the third, and 16 in the fourth). All interviews were recorded and transcribed verbatim. We used the NVivo software. In the quantitative stage, our sample comprised 206 firms. The questionnaire was created from an extensive literature review and from the results of the qualitative phase.
The Q-sort method was applied, and a pre-test was conducted with production managers. Measures were taken to reduce Common Method Variance. In total, ten scales were used: 1) Continuous Improvement; 2) Information Management; 3) Learning; 4) Customer Support; 5) Innovation; 6) Operational Efficiency; 7) Flexibility; 8) Customisation; 9) Supplier Management; and 10) Operational Performance. We used confirmatory factor analysis to confirm reliability as well as content, convergent, and discriminant validity. The data were analysed using multiple regressions. Our main results were as follows. First, operational practices act as antecedents of operational capabilities. Second, a typology was created, divided into two constructs. The first construct was called Standalone Capabilities; this group consists of zero-order capabilities such as Customer Support, Innovation, Operational Efficiency, Flexibility, and Supplier Management. These operational capabilities aim to improve the firm's processes and have a direct relationship with operational performance. The second construct was called Across-the-Board Capabilities; it is composed of first-order capabilities such as Continuous Learning and Information Management. These operational capabilities are considered dynamic and play the role of reconfiguring the Standalone Capabilities.
Abstract:
The International Cooperation Agency (identified in this article as IDEA) working in Colombia is one of the most important in Colombian society, with programs that support gender rights, human rights, justice and peace, scholarships, the aboriginal population, youth, the Afro-descendant population, economic development in communities, and environmental development. The identified problem stems from a diversified offer of services, collaboration and social intervention, which requires diverse groups of people with multiple agendas, ways to support their mandates, disciplines, and professional competences. Knowledge creation and the growth and sustainability of the organization can be endangered by a silo culture and the resulting reduced leverage of the separate groups' capabilities. Organizational memory is generally formed by the tacit knowledge of the organization's members, given the value of the accumulated experience that this kind of social work implies. Its loss is therefore a strategic and operational risk when most problem interventions rely on direct work in the socio-economic field and on living real experiences with communities. The knowledge management solution presented in this article starts, first, with the identification of the people and groups concerned and the creation of a knowledge map as a means to strengthen the ties between organizational members; second, it introduces a content management system designed to support the documentation and knowledge-sharing processes; and third, it introduces a methodology for the adaptation of a Balanced Scorecard based on the knowledge management processes. These three main steps lead to a knowledge management "solution" that has been implemented in the organization, comprising three components: a knowledge management system, training support, and promotion of cultural change.
Abstract:
This thesis is an examination of organisational issues faced by Third Sector organisations which undertake nonviolent direct action. A case study methodology is employed and data gathered from four organisations: Earth First!; genetiX Snowball; Greenpeace; and Trident Ploughshares. The argument commences with a review of the literature, which shows that little is known of the organising of nonviolent direct action. Operational definitions of 'organisation' and 'nonviolent direct action' are drawn from the literature. 'Organisation' is conceptualised using new institutionalism; 'nonviolent direct action' is conceptualised using new social movement theory. These concepts inform the case study methodology in the choice of case, the organisations selected and the data-gathering tools. Most data were gathered by semi-structured interview and participant observation. The research findings result from theory-building arising from thick descriptions of the case in the four organisations. The findings suggest that nonviolent direct action is qualitatively different from terrorism or violence. Although there is much diversity in philosophies of nonviolence, the practice of nonviolent direct action has much in common across the four organisations. The argument is that nonviolent direct action is an institution. The findings also suggest that new institutionalism is a fruitful approach to studies of these organisations. Along with nonviolent direct action, three other institutions are identified: 'rules'; consensus decision-making; and 'affinity groups'. An unanticipated finding is that the four organisations are instances of innovation. Tentative theory is developed which brings together the seemingly incompatible concepts of institutions and innovation. The theory suggests preconditions and then stages in the development of new organisational forms in new social movements: innovation.
The three pre-conditions are: the existence of an institutional field; an 'institution-broker' with access to different domains; and a shared 'problem' to resolve. The three stages are: unlocking existing knowledge and practice; bridging different domains of practice or different fields to add, develop or translocate those practices; and establishing those practices within their new combinations or novel locations. Participants are able to move into and between these new organisational forms because they consist of familiar and habitual institutional behaviour.
Abstract:
We report on the operational parameters required to fabricate buried, microstructured waveguides in a z-cut lithium niobate crystal by direct femtosecond laser inscription using a high-repetition-rate, chirped-pulse oscillator system. Refractive index contrasts as high as −0.0127 have been achieved for individual modification tracks. The results pave the way for developing microstructured waveguides with low-loss operation across a wide spectral range, extending into the mid-infrared region up to the end of the transparency range of the host material.
Abstract:
2000 Mathematics Subject Classification: 26A33 (main), 44A40, 44A35, 33E30, 45J05, 45D05
Abstract:
Ivan Hristov Dimovski, Yulian Tsankov Tsankov - Direct operational calculi are constructed for functions u(x, y, t) continuous on a domain of the form D = [0, a] × [0, b] × [0, ∞). Along with the classical Duhamel convolution, the construction uses two non-classical convolutions for the operators ∂²x and ∂²y. These three one-dimensional convolutions are combined into a single three-dimensional convolution u ∗ v in C(D). Instead of Mikusiński's approach, based on convolution quotients, an alternative approach is developed using the multiplier quotients of the convolution algebra (C(D), ∗).
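For orientation, the classical Duhamel convolution that anchors the construction, (u ∗ v)(t) = ∫₀ᵗ u(t − s) v(s) ds, can be checked numerically. The grid size and test functions below are our own choices, not the paper's.

```python
# Numerical sketch of the classical Duhamel convolution
#   (u * v)(t) = integral_0^t u(t - s) v(s) ds
# computed with the trapezoid rule; test functions are our own.
import numpy as np

def duhamel(u, v, t_values, m=400):
    """Evaluate (u * v)(t) for each t in t_values (u, v are callables)."""
    out = []
    for t in t_values:
        s = np.linspace(0.0, t, m + 1)
        f = u(t - s) * v(s)
        out.append(np.sum((f[1:] + f[:-1]) * np.diff(s)) / 2.0)  # trapezoid
    return np.array(out)

t = np.linspace(0.0, 2.0, 5)
one = lambda s: np.ones_like(s)
w = duhamel(one, one, t)            # (1 * 1)(t) = t
we = duhamel(np.exp, np.exp, t)     # (e^t * e^t)(t) = t * e^t
```

The identities (1 ∗ 1)(t) = t and (eᵗ ∗ eᵗ)(t) = t·eᵗ are standard sanity checks; the paper's contribution is combining this t-convolution with two non-classical x- and y-convolutions into one operation on C(D).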
Abstract:
The links between operational practices and performance are well studied in the literature, both theoretically and empirically. However, it is mostly internal factors that are inspected closely as the basis of operational performance, even though the impact of external, environmental factors is often emphasized. Our research fills part of this gap in the literature. We examine how two environmental factors, market dynamism and competition, impact the use of some operational practices (such as quality improvement, product development, automation, etc.) and the resulting operations and business performance. The method of path analysis is used. Data were acquired through an international survey (IMSS – International Manufacturing Strategy Survey), executed in 2005 in 23 participating countries in so-called "innovative" industries (ISIC 28-35), with a sample of 711 firms. Results show that both market dynamism and competition have a large impact on business performance, but the indirect effects, through operations practices, are rather weak compared to the direct ones. The most influential practices are in the areas of process and control and of quality management.
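The direct-versus-indirect decomposition behind this finding can be sketched on synthetic data: the indirect effect of an environmental factor through a practice is the product of the two path coefficients. This is our own illustration with invented coefficients and plain OLS, not the authors' full path model or the IMSS data.

```python
# Hedged path-analysis sketch: an environmental factor (e.g. competition)
# affects performance directly (c') and indirectly via an operational
# practice (a * b). Synthetic data; coefficients below are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 711                                    # same size as the IMSS sample
competition = rng.normal(size=n)
practice = 0.2 * competition + rng.normal(size=n)                # weak a-path
performance = 0.6 * competition + 0.3 * practice + rng.normal(size=n)

# a-path: environment -> operational practice (simple OLS slope)
a = np.cov(competition, practice)[0, 1] / np.var(competition, ddof=1)

# b-path and direct effect c' from regressing performance on both predictors
X = np.column_stack([np.ones(n), competition, practice])
_, c_direct, b = np.linalg.lstsq(X, performance, rcond=None)[0]

indirect = a * b          # indirect effect of competition via the practice
print(f"direct = {c_direct:.2f}, indirect = {indirect:.2f}")
```

With a weak environment-to-practice path, the recovered indirect effect (about a·b) is small next to the direct effect, mirroring the pattern the abstract reports.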
Abstract:
Many applications, including communications, test and measurement, and radar, require the generation of signals with a high degree of spectral purity. One method for producing tunable, low-noise source signals is to combine the outputs of multiple direct digital synthesizers (DDSs) arranged in a parallel configuration. In such an approach, if all noise is uncorrelated across channels, the noise will decrease relative to the combined signal power, resulting in a reduction of sideband noise and an increase in SNR. However, in any real array, the broadband noise and spurious components will be correlated to some degree, limiting the gains achieved by parallelization. This thesis examines the potential performance benefits that may arise from using an array of DDSs, with a focus on several types of common DDS errors, including phase noise, phase truncation spurs, quantization noise spurs, and quantizer nonlinearity spurs. Measurements to determine the level of correlation among DDS channels were made on a custom 14-channel DDS testbed. The investigation of the phase noise of a DDS array indicates that the contribution to the phase noise from the DACs can be decreased to a desired level by using a large enough number of channels. In such a system, the phase noise qualities of the source clock and the system cost and complexity will be the main limitations on the phase noise of the DDS array. The study of phase truncation spurs suggests that, at least in our system, the phase truncation spurs are uncorrelated, contrary to the theoretical prediction. We believe this decorrelation is due to the existence of an unidentified mechanism in our DDS array that is unaccounted for in our current operational DDS model. This mechanism, likely due to some timing element in the FPGA, causes some randomness in the relative phases of the truncation spurs from channel to channel each time the DDS array is powered up. 
This randomness decorrelates the phase truncation spurs, opening the potential for SFDR gain from using a DDS array. The analysis of the correlation of quantization noise spurs in an array of DDSs shows that the total quantization noise power of each DDS channel is uncorrelated for nearly all DAC output bit widths. This suggests that a nearly N-fold gain in SQNR is possible for an N-channel array of DDSs. This gain will be most apparent for low-bit DACs, in which quantization noise is notably higher than the thermal noise contribution. Lastly, the measurements of the correlation of quantizer nonlinearity spurs demonstrate that the second and third harmonics are highly correlated across channels at all frequencies tested. This means that there is no benefit to using an array of DDSs for the problem of in-band quantizer nonlinearities. As a result, alternate methods of harmonic spur management must be employed.
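The averaging argument underlying these results, that coherently summed signal power grows as N² while uncorrelated noise power grows only as N (so SNR improves by roughly a factor of N), while fully correlated noise gains nothing, can be reproduced with a toy simulation. This is our own sketch, not the 14-channel testbed measurements.

```python
# Toy model of DDS array combining: independent channel noise averages down
# (~10*log10(N) dB SNR gain); shared (fully correlated) noise does not.
import numpy as np

rng = np.random.default_rng(42)
n_ch, n_samp = 14, 4096                     # 14 channels, as in the testbed
t = np.arange(n_samp)
signal = np.sin(2 * np.pi * 0.05 * t)       # common DDS output tone

def snr_db(x):
    """SNR of x against its coherent sine component (projection on `signal`)."""
    s = signal * (x @ signal) / (signal @ signal)
    return 10 * np.log10(np.sum(s ** 2) / np.sum((x - s) ** 2))

uncorr = signal + 0.1 * rng.normal(size=(n_ch, n_samp))            # independent noise
corr = signal + 0.1 * np.tile(rng.normal(size=n_samp), (n_ch, 1))  # shared noise

gain_uncorr = snr_db(uncorr.sum(axis=0)) - snr_db(uncorr[0])  # ~10*log10(14) dB
gain_corr = snr_db(corr.sum(axis=0)) - snr_db(corr[0])        # ~0 dB
print(f"uncorrelated: +{gain_uncorr:.1f} dB, correlated: +{gain_corr:.1f} dB")
```

The uncorrelated case approaches the ideal 10·log10(14) ≈ 11.5 dB gain, while the shared-noise case shows none, which is why the correlated second and third harmonics leave no room for improvement from parallelization.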