940 results for MODEL (Computer program language)


Relevance:

100.00%

Publisher:

Abstract:

Rapid social, economic, cultural and environmental changes have brought significant shifts in lifestyles and have contributed to the growth and generalisation of eating food and meals outside the home. Portugal follows this trend: meals away from home, which a few years ago were an occasional event, are today a regular practice of Portuguese families, not only during the working week but also at weekends. Visits to shopping centres, which have become a habit in our country, include a stop at the food courts, spaces notable for their food diversity where fast-food meals predominate. It is therefore essential that the foods to be consumed are chosen adequately and in a balanced way. The present work sought to evaluate the habits and perceptions of consumers of fast meals based on a specific menu whose main food is bread. Subsequently, and according to the consumption preferences found, a nutritional evaluation of the choices was carried out. The study involved 150 individuals who visited a fast-food restaurant located in the food court of a shopping centre in Viseu. A self-administered questionnaire, designed by us, was applied, divided into four parts: sociodemographic characterisation; consumption habits of the respondents; products chosen by the respondents; and degree of satisfaction with the chosen products. Statistical analyses were performed with the Statistical Package for the Social Sciences - SPSS® for Windows, version 22. Chi-square tests with Monte Carlo simulation were carried out, at a significance level of 0.05.
Based on the respondents' most frequent choices, the menus were nutritionally evaluated using the DIAL 1.19 program (version 1); when information was not available there, the online Portuguese food composition table (INSA, 2010) was used. The values obtained for the total caloric value (TCV), macronutrients, fibre, cholesterol and sodium were compared with the Recommended Daily Allowances (RDA). The sample comprised 68.7% women and 31.3% men, with a mean age of 29.9 ± 3 years, mostly employed (64.7%). Most respondents (54.7%) had higher education. A large part of the sample did not consider themselves habitual fast-food consumers and reported frequently eating a balanced diet; only 5% visited the premises more than once a week. Among the available products, the preference was for sandwiches and chips, with lunch being the time of greatest consumption. The nutritional evaluation of the respondents' preferred choices showed that the TCV of the menus including water as the drink is within the caloric limits recommended for lunch, except for the menus including the hot chicken sandwich on oregano bread and the cold fresh-cheese sandwich, which stand out for falling below the recommended minimum. Conversely, including a soft drink in the menu raises the TCV by 18%, whatever the sandwich. A detailed analysis shows that these menus are unbalanced: 33.3% of them present protein values above the RDA, while carbohydrate and lipid values are mostly within the limits, with only 13.3% of the menus outside those values. As for fibre and sodium intake, 86.7% of the menus are unbalanced, with excessive sodium values and fibre values 33% below the recommended minimum.
As this is a case study including only a single restaurant in one food court, serving bread-based menus (sandwiches), the results are interpreted cautiously and without generalisation. We can nevertheless conclude, given the results obtained, that the salt content of the menus needs to be reduced. Furthermore, so that consumers can compare food options and make informed decisions, we consider it essential that the nutritional information of the proposed menus be made available.
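The chi-square tests with Monte Carlo simulation mentioned in this abstract were run in SPSS; as an illustration of the idea only (not the authors' analysis, and with made-up counts), a permutation version can be sketched in Python:

```python
import random

def chi2_stat(table):
    """Pearson chi-square statistic for a contingency table (list of rows)."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / total
            stat += (obs - exp) ** 2 / exp
    return stat

def monte_carlo_p(table, n_sim=2000, seed=1):
    """Monte Carlo p-value: rebuild individual (row, column) labels,
    shuffle the column labels, and count how often the simulated
    statistic reaches the observed one."""
    rng = random.Random(seed)
    row_lab = [i for i, r in enumerate(table) for c in r for _ in range(c)]
    col_lab = [j for r in table for j, c in enumerate(r) for _ in range(c)]
    observed = chi2_stat(table)
    n_rows, n_cols = len(table), len(table[0])
    hits = 0
    for _ in range(n_sim):
        rng.shuffle(col_lab)
        sim = [[0] * n_cols for _ in range(n_rows)]
        for r, c in zip(row_lab, col_lab):
            sim[r][c] += 1
        if chi2_stat(sim) >= observed:
            hits += 1
    return (hits + 1) / (n_sim + 1)
```

A strongly associated table yields a small p-value, while a perfectly uniform table yields p = 1, since every permuted statistic is at least the observed value of zero.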

Relevance:

100.00%

Publisher:

Abstract:

This work employed free-licence hardware and software tools to set up a low-cost, easily deployed cellular base station (BTS). Starting from the technical concepts behind installing the OpenBTS system, and using the USRP N210 (Universal Software Radio Peripheral) hardware, a network analogous to the GSM mobile telephony standard was deployed. Mobile phones were registered as SIP (Session Initiation Protocol) extensions in Asterisk, making it possible to place calls between terminals, send text messages (SMS), and make calls from an OpenBTS terminal to another mobile operator, among other services.

Relevance:

100.00%

Publisher:

Abstract:

Sequence problems are among the most challenging interdisciplinary topics of today. They are ubiquitous in science and daily life and occur, for example, in the form of DNA sequences encoding all the information of an organism, as text (natural or formal), or in the form of a computer program. Sequence problems therefore occur in many variations in computational biology (drug development), coding theory, data compression, and quantitative and computational linguistics (e.g. machine translation). In recent years, several proposals have appeared to formulate sequence problems such as the closest string problem (CSP) and the farthest string problem (FSP) as integer linear programming problems (ILPP). In this talk we present a novel general approach that reduces the size of the ILPP by grouping isomorphous columns of the string matrix together. The approach is of practical use, since the solution of sequence problems is very time consuming, in particular when the sequences are long.
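The column-grouping idea can be sketched quickly: two columns of the string matrix are isomorphous when one becomes the other under a renaming of the alphabet, so each column can be replaced by a canonical pattern plus a multiplicity, shrinking the ILP. A minimal Python sketch (an illustration of the grouping step only, not the talk's actual reduction):

```python
def column_pattern(col):
    """Canonical pattern of a column: each symbol is replaced by the order of
    its first appearance, so columns that differ only by a renaming of the
    alphabet map to the same pattern."""
    seen = {}
    return tuple(seen.setdefault(ch, len(seen)) for ch in col)

def group_columns(strings):
    """Group the columns of an aligned string matrix by isomorphism class.
    Returns a dict mapping canonical pattern -> multiplicity."""
    groups = {}
    for col in zip(*strings):
        pat = column_pattern(col)
        groups[pat] = groups.get(pat, 0) + 1
    return groups
```

For the three strings "AAC", "AGC", "ATC", the constant columns ('A','A','A') and ('C','C','C') collapse into one class, so an ILP over column classes needs two column blocks instead of three.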

Relevance:

100.00%

Publisher:

Abstract:

The objective of this research is to synthesize structural composites designed with particular areas defined with custom modulus, strength and toughness values in order to improve the overall mechanical behavior of the composite. Such composites are defined and referred to as 3D-designer composites. These composites will be formed from liquid crystalline polymers and carbon nanotubes. The fabrication process is a variation of the rapid prototyping process, which is a layered, additive-manufacturing approach. Composites formed using this process can be custom designed, by apt modeling methods, for superior performance in advanced applications. The focus of this research is on enhancement of Young's modulus in order to make the final composite stiffer. The strength and toughness of the final composite with respect to various applications are also discussed. We have taken into consideration the mechanical properties of the final composite at different fiber volume contents as well as at different orientations and lengths of the fibers. The orientation of the LC monomers is intended to be carried out using electric or magnetic fields. A computer program incorporating the Mori-Tanaka modeling scheme is used to generate the stiffness matrix of the final composite. The final properties are then deduced from the stiffness matrix using composite micromechanics. Eshelby's tensor, required to calculate the stiffness tensor using the Mori-Tanaka method, is calculated using a numerical scheme that determines its components (Gavazzi and Lagoudas 1990). The numerical integration is solved using a Gaussian quadrature scheme and is worked out in MATLAB as well. MATLAB provides a good range of commands and algorithms that can be used to evaluate the formulation efficiently. Graphs are plotted using different combinations of the results and the parameters involved in obtaining them.
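As a minimal illustration of the Gaussian quadrature step used for the Eshelby tensor integrals (hard-coded low-order Gauss-Legendre nodes, in Python rather than the authors' MATLAB code):

```python
import math

# Nodes and weights for n-point Gauss-Legendre quadrature on [-1, 1].
# An n-point rule integrates polynomials up to degree 2n - 1 exactly.
GL = {
    2: ([-1 / math.sqrt(3), 1 / math.sqrt(3)], [1.0, 1.0]),
    3: ([-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5)], [5 / 9, 8 / 9, 5 / 9]),
}

def gauss_legendre(f, n=3):
    """Approximate the integral of f over [-1, 1] with an n-point rule."""
    xs, ws = GL[n]
    return sum(w * f(x) for x, w in zip(xs, ws))
```

The 2-point rule is already exact for x^2 (integral 2/3) and the 3-point rule for x^4 (integral 2/5); the tensor components in practice require a product rule over two angular variables built from such 1-D rules.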

Relevance:

100.00%

Publisher:

Abstract:

Several definitions exist that attempt to identify the boundaries between languages and dialects, yet these distinctions are inconsistent and are often as political as they are linguistic (Chambers & Trudgill, 1998). A different perspective is offered in this thesis, by investigating how closely related linguistic varieties are represented in the brain and whether they engender cognitive effects similar to those often reported for bilingual speakers of recognised independent languages, based on the principles of Green's (1998) model of bilingual language control. Study 1 investigated whether bidialectal speakers exhibit similar benefits in non-linguistic inhibitory control as a result of the maintenance and use of two dialects, as has been proposed for bilinguals, who regularly employ inhibitory control mechanisms in order to suppress one language while speaking the other. The results revealed virtually identical performance across the monolingual, bidialectal and bilingual participant groups, thereby failing to find a cognitive control advantage not just in bidialectal speakers over monodialectals/monolinguals, but also in bilinguals, adding to a growing body of evidence which challenges this bilingual advantage in non-linguistic inhibitory control. Study 2 investigated the cognitive representation of dialects using an adaptation of a language switching paradigm to determine whether the effort required to switch between dialects is similar to the effort required to switch between languages. The results closely replicated what is typically shown for bilinguals: bidialectal speakers exhibited a symmetrical switch cost, like balanced bilinguals, while monodialectal speakers, who were taught to use the dialect words before the experiment, showed the asymmetrical switch cost typically displayed by second language learners.
These findings augment Green's (1998) model by suggesting that words from different dialects are also tagged in the mental lexicon, just like words from different languages, and that, as a consequence, it takes cognitive effort to switch between these mental settings. Study 3 explored an additional explanation for language switching costs by investigating whether changes in articulatory settings when switching between different linguistic varieties could, at least in part, be responsible for the previously reported switching costs. Using a paradigm which required participants to switch between different articulatory settings, e.g. glottal stops/aspirated /t/ and whispers/normal phonation, the results also demonstrated the presence of switch costs, suggesting that switching between linguistic varieties has a motor task-switching component which is independent of representations in the mental lexicon. Finally, Study 4 investigated how much exposure is needed to be able to distinguish between different varieties, using two novel language categorisation tasks which compared German vs Russian cognates, and Standard Scottish English vs Dundonian Scots cognates. The results showed that only a small amount of exposure (i.e. a couple of days' worth) is needed to enable listeners to distinguish between different languages, dialects or accents based on general phonetic and phonological characteristics, suggesting that the general sound template of a language variety can be represented before exact lexical representations have been formed. Overall, these results show that bidialectal use of typologically closely related linguistic varieties employs similar cognitive mechanisms to bilingual language use. This thesis is the first to explore the cognitive representations and mechanisms that underpin the use of typologically closely related varieties.
It offers a few novel insights and serves as the starting point for a research agenda that can yield a more fine-grained understanding of the cognitive mechanisms that may operate when speakers use closely related varieties. In doing so, it urges caution when making assumptions about differences in the mechanisms used by individuals commonly categorised as monolinguals, to avoid potentially confounding any comparisons made with bilinguals.
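The switch-cost measure behind Study 2 is simple arithmetic over mean reaction times; a sketch with hypothetical data (the variety names, trial layout and numbers are invented for illustration):

```python
def mean(xs):
    return sum(xs) / len(xs)

def switch_costs(rts):
    """Per-variety switch cost: mean reaction time on switch trials minus
    mean reaction time on repeat trials (in ms).
    rts maps (variety, trial_type) -> list of RTs; the layout is hypothetical."""
    varieties = {v for v, _ in rts}
    return {v: mean(rts[(v, 'switch')]) - mean(rts[(v, 'repeat')])
            for v in varieties}
```

Roughly equal costs for both varieties would mirror the symmetrical pattern of balanced bilinguals; a much larger cost for one variety is the asymmetry typical of second language learners.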

Relevance:

100.00%

Publisher:

Abstract:

Introduction: Knowing the experience of abuse, the contextual determinants that led to breaking away from the situation, and the attempts to build a more harmonious future is essential to raise awareness and better understand victims of domestic violence. Objectives: To understand the suffering of women victims of violence. Methods: This is an intentional sample of 21 women who were at a shelter home or in the community. The data were collected by interviews, guided by a script organised into four themes. The interviews were audio-recorded with the participants' permission, fully transcribed, and analysed as two different corpuses, depending on the context in which they occurred. The analysis was conducted using the ALCESTE computer program. The study obtained a favourable opinion from the Committee on Health and Welfare of the University of Évora. Results: From the analysis of the first sample, five classes emerged. The association of the words gave the meaning of each class, which we have named as follows: Class 1 - Precipitating events; Class 2 - Experience of abuse; Class 3 - Two feet in the present and looking into the future; Class 4 - The present and learning from the experience of abuse; and Class 5 - Violence in general. From the analysis of the community sample, four classes emerged, named: Class 1 - Violence in general; Class 2 - Precipitating events; Class 3 - Experience of abuse; and Class 4 - Support in the process. Conclusions: Women at the shelter home are very focused on their experience of violence and its entire context, and the future is distant and unclear. Women in the community have a more comprehensive view of the phenomenon of violence as a whole; they can decentre from their personal experiences and recognise the importance of support in the process of building a future.

Relevance:

60.00%

Publisher:

Abstract:

RWMODEL II simulates the Rescorla-Wagner model of Pavlovian conditioning. It is written in Delphi and runs under Windows 3.1 and Windows 95. The program was designed for novice and expert users and can be employed in teaching, as well as in research. It is user friendly and requires a minimal level of computer literacy but is sufficiently flexible to permit a wide range of simulations. It allows the display of empirical data, against which predictions from the model can be validated.
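RWMODEL II simulates the Rescorla-Wagner trial-by-trial update, in which each present cue's associative strength V changes by αβ(λ − ΣV). A minimal sketch of that update rule in Python (parameter values are illustrative, and this is not the Delphi program itself):

```python
def rescorla_wagner(trials, alpha=0.3, beta=0.5, lam=1.0):
    """Trial-by-trial Rescorla-Wagner update: dV = alpha * beta * (lambda - sum V),
    where sum V runs over the cues present on the trial.
    trials: list of (cues_present, reinforced) pairs.
    Returns the associative strengths after each trial."""
    V = {}
    history = []
    for cues, reinforced in trials:
        total = sum(V.get(c, 0.0) for c in cues)
        error = (lam if reinforced else 0.0) - total
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * beta * error
        history.append(dict(V))
    return history
```

With repeated reinforced presentations of a single cue, V climbs the familiar negatively accelerated acquisition curve towards λ; pre-training one cue before compound trials reproduces blocking.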

Relevance:

60.00%

Publisher:

Abstract:

Program compilation can be formally defined as a sequence of equivalence-preserving transformations, or refinements, from high-level language programs to assembler code. Recent models also incorporate timing properties, but the resulting formalisms are intimidatingly complex. Here we take advantage of a new, simple model of real-time refinement, based on predicate transformer semantics, to present a straightforward compilation formalism that incorporates real-time constraints. (C) 2002 Elsevier Science B.V. All rights reserved.
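A predicate transformer maps a postcondition to the weakest precondition that guarantees it. Ignoring the paper's real-time constraints entirely, the core idea can be sketched by encoding predicates as Python functions over states (a toy encoding for intuition, not the paper's formalism):

```python
def wp_assign(var, expr):
    """wp(var := expr, Q): Q must hold of the state after the assignment,
    i.e. substitute expr for var before testing Q."""
    def transformer(Q):
        return lambda state: Q({**state, var: expr(state)})
    return transformer

def wp_seq(t1, t2):
    """Sequential composition: wp(S1; S2, Q) = wp(S1, wp(S2, Q))."""
    return lambda Q: t1(t2(Q))
```

For the program x := x + 1; y := x * 2 and postcondition y = 8, the computed precondition holds exactly of states where x = 3, which is what hand calculation gives.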

Relevance:

60.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

60.00%

Publisher:

Abstract:

The purpose of this research was to apply model checking, using a symbolic model checker, to Predicate Transition Nets (PrT nets). A PrT net is a formal model of information flow which allows system properties to be modeled and analyzed. The aim of this thesis was to use the modeling and analysis power of PrT nets to provide a mechanism for verifying the system model. Symbolic Model Verifier (SMV) was the model checker chosen in this thesis, and in order to verify the PrT net model of a system, it was translated into the SMV input language. A software tool was implemented which translates a PrT net into the SMV language, enabling the process of model checking. The system includes two parts: the PrT net editor, where the representation of a system can be edited, and the translator, which converts the PrT net into an SMV program.
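To give a flavour of the translation target, here is a sketch that emits SMV text for an ordinary 1-safe place/transition net (a far simpler net class than PrT nets, and not the thesis's actual tool; the SMV fragment uses only standard MODULE/VAR/INIT/TRANS/SPEC declarations):

```python
def net_to_smv(places, transitions, initial, spec):
    """Emit an SMV module whose TRANS relation fires one enabled transition
    per step. transitions: list of (name, pre_places, post_places);
    each place becomes a boolean state variable."""
    lines = ["MODULE main", "VAR"]
    lines += [f"  {p} : boolean;" for p in places]
    lines.append("INIT " + " & ".join(p if p in initial else f"!{p}"
                                      for p in places))
    rels = []
    for _name, pre, post in transitions:
        touched = sorted(set(pre) | set(post))
        conj = list(pre)                                        # enabling condition
        conj += [f"next({p})" if p in post else f"!next({p})"   # token moves
                 for p in touched]
        conj += [f"next({p}) = {p}"                             # frame condition
                 for p in places if p not in touched]
        rels.append("(" + " & ".join(conj) + ")")
    lines.append("TRANS " + " | ".join(rels))
    lines.append(f"SPEC {spec}")
    return "\n".join(lines)
```

For a net with one transition moving a token from p1 to p2, the output declares both places, constrains the initial marking, and attaches the CTL property to be checked.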

Relevance:

50.00%

Publisher:

Abstract:

A Geographic Information System (GIS) was used to model datasets of Leyte Island, the Philippines, to identify land suitable for a forest extension program on the island. The datasets were modelled to provide maps of the distance of land from cities and towns, land of suitable elevation and slope for smallholder forestry, and land of various soil types. An expert group was used to assign numeric site suitabilities to the soil types, and maps of site suitability were used to assist the selection of municipalities for the provision of extension assistance to smallholders. Modelling of the datasets was facilitated by recent developments in the ArcGIS® suite of computer programs, and derivation of elevation and slope was assisted by the availability of digital elevation models (DEM) produced by the Shuttle Radar Topography Mission (SRTM). The usefulness of GIS software as a decision support tool for small-scale forestry extension programs is discussed.
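The suitability overlay described above reduces to combining per-cell criteria; a toy raster-overlay sketch in Python (the thresholds, weights and grids are invented for illustration, and real work would use ArcGIS or a raster library):

```python
def suitability(elev, slope, soil_score, max_elev=1000.0, max_slope=18.0):
    """Per-cell overlay: a cell keeps its (expert-assigned) soil score when it
    passes the elevation and slope thresholds, otherwise it scores 0.
    All grids are lists of rows; the threshold values are illustrative only."""
    rows, cols = len(elev), len(elev[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if elev[i][j] <= max_elev and slope[i][j] <= max_slope:
                out[i][j] = soil_score[i][j]
    return out
```

Summing or ranking the resulting scores per administrative unit is then a natural way to shortlist municipalities for extension assistance.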

Relevance:

50.00%

Publisher:

Abstract:

Map algebra is a data model and simple functional notation for studying the distribution and patterns of spatial phenomena. It uses a uniform representation of space as discrete grids, which are organized into layers. This paper discusses extensions to map algebra to handle neighborhood operations with a new data type called a template. Templates provide general windowing operations on grids to enable spatial models for cellular automata, mathematical morphology, and local spatial statistics. A programming language for map algebra that incorporates templates and special processing constructs, called MapScript, is described. Example program scripts are presented that perform diverse and interesting neighborhood analyses for descriptive, model-based and process-based analysis.
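A template in this sense is essentially a set of cell offsets defining a neighbourhood. A minimal Python sketch of a map-algebra focal (neighbourhood) operation over such a template (an analogy only; MapScript's own syntax is not shown in the abstract):

```python
def focal_apply(grid, template, func):
    """Apply func to the neighbourhood of each cell, where the neighbourhood
    is given by template, a list of (di, dj) offsets; offsets falling outside
    the grid are skipped. This mimics a map-algebra focal operation."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for i in range(rows):
        row = []
        for j in range(cols):
            vals = [grid[i + di][j + dj]
                    for di, dj in template
                    if 0 <= i + di < rows and 0 <= j + dj < cols]
            row.append(func(vals))
        out.append(row)
    return out

# A 3x3 Moore-neighbourhood template (the cell itself plus its 8 neighbours).
MOORE_3X3 = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)]
```

Passing `sum` gives a focal sum; swapping in a mean, a maximum, or a cellular-automaton rule gives the other neighbourhood analyses the paper mentions.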

Relevance:

50.00%

Publisher:

Abstract:

MSc dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain a Master's degree in Electrical and Computer Engineering

Relevance:

50.00%

Publisher:

Abstract:

Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is composed mainly of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture that the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first is a real-time system able to interpret Portuguese Sign Language. The second is an online system able to help a robotic soccer referee judge a game in real time.
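For the dynamic-gesture side, one HMM is trained per gesture and recognition picks the model that gives the observed feature sequence the highest likelihood. A minimal forward-algorithm sketch in pure Python (the thesis presumably used a library for this; all states and probabilities below are invented):

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Likelihood of an observation sequence under an HMM (forward algorithm).
    obs: sequence of observation symbols; start_p, trans_p, emit_p are dicts
    of initial, transition, and emission probabilities."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())
```

A useful sanity check is that, for a fixed length, the likelihoods of all possible observation sequences sum to one; in a recogniser, `forward` would be called once per gesture model and the argmax returned.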

Relevance:

50.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in a conventional programming language makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized, in the context of design space exploration, by the development tools to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
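The queue-based firing discipline described above can be sketched directly. The naive run-time scheduler below, which polls every node for enabled firings, is exactly the kind of dynamic overhead that quasi-static scheduling tries to replace with precomputed static sequences (a toy sketch, not RVC-CAL; source actors with zero inputs would need a separate firing bound, omitted here):

```python
from collections import deque

class Node:
    """A dataflow actor that fires when each input queue holds a token."""
    def __init__(self, func, n_inputs):
        self.func = func
        self.inputs = [deque() for _ in range(n_inputs)]   # FIFO edges in
        self.outputs = []                                  # (node, port) pairs out

    def can_fire(self):
        return all(q for q in self.inputs)

    def fire(self):
        args = [q.popleft() for q in self.inputs]          # consume one token each
        result = self.func(*args)
        for node, port in self.outputs:                    # produce to successors
            node.inputs[port].append(result)
        return result

def run(nodes):
    """Naive dynamic scheduler: keep firing any enabled node until none can."""
    results, fired = [], True
    while fired:
        fired = False
        for node in nodes:
            if node.can_fire():
                results.append(node.fire())
                fired = True
    return results
```

Feeding two tokens into an adder wired to a doubler fires both actors in dependency order; a quasi-static scheduler would instead emit the fixed sequence "adder, doubler" once and skip the run-time polling.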