793 results for objective modality
Abstract:
The complexity of planning a wireless sensor network depends on the optimization objectives and on the application requirements. Although Murphy's Law applies everywhere in practice, a good planning algorithm helps designers become aware of the weak points of their design and improve them before the problems are exposed in a real deployment. A 3D multi-objective planning algorithm is proposed in this paper to provide solutions for the locations of nodes and their properties. It employs a purpose-built ray-tracing scheme for modelling the sensing signal and radio propagation; it is therefore sensitive to obstacles and makes the models of sensing coverage and link quality more realistic than other heuristics that use ideal unit-disk models. The proposed algorithm aims at an overall optimization of hardware cost, coverage, link quality and lifetime. Each of these metrics is therefore modelled and normalized to compose a desirability function. An evolutionary algorithm is designed to tackle this NP-hard multi-objective optimization problem efficiently. The proposed algorithm is applicable to both indoor and outdoor 3D scenarios. Different parameters that affect performance are analyzed through extensive experiments, and two state-of-the-art algorithms are re-implemented and tested with the same configuration as the proposed algorithm. The results indicate that the proposed algorithm converges efficiently within 600 iterations and performs better than the compared heuristics.
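The abstract does not give the exact form of the desirability function, so the following is only a minimal sketch of the general idea under stated assumptions: four normalized metrics (cost, coverage, link quality, lifetime), each mapped to [0, 1] with 1 as best, are combined by a weighted geometric mean that an evolutionary search could use as a fitness value. All names, values and weights are illustrative, not taken from the paper.

```python
import numpy as np

def desirability(metrics, weights):
    """Combine normalized metric values (each in [0, 1], higher = better)
    into one desirability score via a weighted geometric mean."""
    m = np.clip(np.asarray(metrics, dtype=float), 1e-9, 1.0)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(np.exp(np.sum(w * np.log(m))))

# Illustrative candidate node placement: cost, coverage, link quality and
# lifetime already normalized so that 1.0 is best (toy numbers).
candidate = {"cost": 0.7, "coverage": 0.9, "link_quality": 0.8, "lifetime": 0.6}
score = desirability(list(candidate.values()), weights=[1, 1, 1, 1])
print(f"desirability = {score:.3f}")
```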
Abstract:
Three-dimensional kinematic analysis provides a quantitative assessment of upper limb motion and is used as an outcome measure to evaluate movement disorders. The aim of the present study is to present a set of kinematic metrics for quantifying characteristics of movement performance and the functional status of the subject during the execution of the activity of daily living (ADL) of drinking from a glass, to apply these metrics to healthy people and to a population with cervical spinal cord injury (SCI), and to analyze the ability of the metrics to discriminate between healthy and pathological movement. Nineteen people participated in the study: 7 subjects with tetraplegia at metameric level C6, 4 subjects with tetraplegia at metameric level C7, and 8 healthy subjects. The movement was recorded with a photogrammetry system. The drinking ADL was divided into a series of clearly identifiable phases to facilitate analysis. Metrics describing the duration of the reaching phase, the range of motion of the joints analyzed, and characteristics of movement performance such as the efficiency, accuracy and smoothness of the distal segment and inter-joint coordination were obtained. With respect to the metrics measured, performance of the drinking task was more variable in people with SCI than in the control group. Reaching time was longer in the SCI groups, and relative deficits in efficiency were larger in people with SCI than in controls. The proposed metrics were able to discriminate between healthy and pathological movement and can provide useful information in a clinical setting about the quality of the movement performed by healthy people and people with SCI during functional activities.
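As an illustration of the kind of end-point metrics described, here is a minimal sketch, under assumptions not taken from the paper, of two common measures: path efficiency (straight-line distance over travelled path length) and a simple smoothness proxy (the number of local speed maxima, sometimes called movement units). The toy trajectory and sampling rate are invented for the example.

```python
import numpy as np

def endpoint_metrics(xyz, dt):
    """Two illustrative end-point metrics from a 3D hand-marker trajectory
    sampled every dt seconds:
      - efficiency: straight-line distance / travelled path length
      - movement units: number of local maxima of the speed profile
        (fewer units is usually read as a smoother movement)."""
    xyz = np.asarray(xyz, dtype=float)
    steps = np.diff(xyz, axis=0)
    path = np.linalg.norm(steps, axis=1).sum()
    straight = np.linalg.norm(xyz[-1] - xyz[0])
    efficiency = straight / path if path > 0 else 0.0
    speed = np.linalg.norm(steps, axis=1) / dt
    peaks = int(np.sum((speed[1:-1] > speed[:-2]) & (speed[1:-1] > speed[2:])))
    return efficiency, peaks

# Toy reach: a straight 30 cm movement with a minimum-jerk time profile.
t = np.linspace(0.0, 1.0, 100)
s = 10 * t**3 - 15 * t**4 + 6 * t**5
traj = np.column_stack([0.3 * s, np.zeros_like(s), np.zeros_like(s)])
eff, units = endpoint_metrics(traj, dt=t[1] - t[0])
print(f"efficiency = {eff:.2f}, movement units = {units}")
```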
Abstract:
Fiber-reinforced polymer (FRP) composites have found widespread use in the repair and strengthening of concrete structures. FRP composites exhibit a high strength-to-weight ratio and corrosion resistance, and are convenient to use in repair applications. Externally bonded FRP flexural strengthening of concrete beams is the most widespread application of this technique. A common cause of failure in such members is intermediate crack-induced debonding (IC debonding) of the FRP substrate from the concrete in an abrupt manner. Continuous monitoring of the concrete-FRP interface is essential to prevent IC debonding. Objective condition assessment and performance evaluation are challenging activities because they require some type of monitoring to track the response over a period of time. In this paper, a multi-objective model updating method integrated in a structural health monitoring context is demonstrated as a promising technology for ensuring the safety and reliability of this kind of strengthening technique. The proposed method, solved by a multi-objective extension of the particle swarm optimization method, is based on strain measurements under controlled loading. The use of permanently installed fiber Bragg grating (FBG) sensors, embedded into the FRP-concrete interface or bonded onto the FRP strip, together with the proposed methodology, results in an automated method able to operate in an unsupervised mode.
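The abstract does not detail the updating objectives, so the sketch below is only a hedged illustration of a multi-objective model-updating evaluation: a candidate interface-parameter vector is scored by a strain residual against (here synthetic) FBG readings and by its deviation from the undamaged baseline, and a Pareto-dominance test compares candidates as a multi-objective particle swarm would. The toy linear "forward model" and all numbers are assumptions made for the example.

```python
import numpy as np

def objectives(theta, measured, predict):
    """Two illustrative updating objectives for a candidate parameter vector
    theta (bond-stiffness ratios per interface zone, 1 = undamaged): the
    strain residual against the measurements and the deviation from the
    undamaged baseline (a simple regularization)."""
    residual = np.linalg.norm(predict(theta) - measured)
    deviation = np.linalg.norm(theta - 1.0)
    return np.array([residual, deviation])

def dominates(f_a, f_b):
    """Pareto dominance for minimization: a dominates b if it is no worse in
    every objective and strictly better in at least one."""
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

# Toy forward model: FBG strains respond linearly to the two stiffness ratios
# (purely illustrative, standing in for a finite-element prediction).
sensitivity = np.array([[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]])
true_theta = np.array([1.0, 0.6])           # second zone partially debonded
measured = sensitivity @ true_theta

predict = lambda th: sensitivity @ th
f_true = objectives(true_theta, measured, predict)
f_other = objectives(np.array([0.9, 0.5]), measured, predict)
print("true parameters dominate the alternative:", dominates(f_true, f_other))
```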
Abstract:
Evolutionary algorithms are well suited to solving damage identification problems in a multi-objective context. However, the performance of these methods can deteriorate quickly with increasing noise intensity, which gives rise to numerous uncertainties. In this paper, a statistical structural damage detection method formulated in a multi-objective context is proposed. A statistical analysis is implemented to take into account the uncertainties in the structural model and in the measured structural modal parameters. The presented method is verified on a number of simulated damage scenarios, and the effects of noise and damage levels on damage detection are investigated.
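A minimal sketch of how such a statistical treatment can be set up, with the model and all numbers invented for illustration: a frequency-residual objective is evaluated repeatedly under simulated measurement noise, and its mean and spread, rather than a single value, characterize the candidate damage state.

```python
import numpy as np

rng = np.random.default_rng(0)

def freq_residual(alpha, measured, model):
    """Objective: normalized natural-frequency residual for a candidate
    damage vector alpha (stiffness reduction per element)."""
    predicted = model(alpha)
    return float(np.linalg.norm((predicted - measured) / measured))

# Toy 'model': frequencies drop linearly with element damage (an illustrative
# stand-in for a finite-element eigenvalue analysis).
base = np.array([10.0, 27.0, 52.0])                    # Hz, undamaged
sens = np.array([[0.8, 0.3], [0.4, 0.9], [0.2, 0.5]])  # Hz per unit damage
model = lambda a: base - sens @ a

true_alpha = np.array([0.0, 0.3])
clean = model(true_alpha)

# Statistical treatment: evaluate the objective repeatedly under simulated
# 1% measurement noise and summarize it by its mean and standard deviation.
samples = [freq_residual(true_alpha,
                         clean * (1 + rng.normal(0, 0.01, clean.size)),
                         model)
           for _ in range(500)]
print(f"objective under noise: mean = {np.mean(samples):.4f}, "
      f"std = {np.std(samples):.4f}")
```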
Abstract:
This work presents objective results on the acoustic quality of the Compañia de Jesús Church in Cordoba, Argentina. The acoustics of this temple, built by the Jesuit Order (Orden Jesuita) two centuries ago and declared a World Heritage Site by UNESCO in 2000, is currently considered optimal by musicians as well as by the general public. In the second half of the 16th century, with the Catholic reform, the need for improved speech intelligibility was given priority, the Jesuits being one of the orders that attached most importance to the construction of their temples; this church has constructive and spatial characteristics consistent with those needs. To carry out the acoustic assessment of the precinct, a working methodology was developed that allowed the results of objective measures, obtained through field measurements and spatial modelling, to be compared with the results of subjective appreciation gathered through surveys, with the aim of characterizing the sound space acoustically. This paper presents the comparison between the subjective results and the objective criteria, from which important conclusions on the acoustic behavior of the temple were drawn, and interesting data were obtained on the subjective response to the acoustics of the church.
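As one example of an objective room-acoustic measure of the kind compared against survey results, the sketch below estimates reverberation time from an impulse response via Schroeder backward integration (a T20-style fit extrapolated to -60 dB). The synthetic impulse response and sampling rate are illustrative; the parameters actually measured in the paper are not listed in the abstract.

```python
import numpy as np

def schroeder_rt(ir, fs, db_high=-5.0, db_low=-25.0):
    """Estimate reverberation time from a room impulse response: Schroeder
    backward integration of the squared IR, a linear fit of the decay between
    db_high and db_low (a T20-style range), extrapolated to -60 dB."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]
    decay_db = 10.0 * np.log10(energy / energy[0])
    t = np.arange(ir.size) / fs
    mask = (decay_db <= db_high) & (decay_db >= db_low)
    slope, _ = np.polyfit(t[mask], decay_db[mask], 1)   # dB per second
    return -60.0 / slope

# Synthetic exponentially decaying noise standing in for a measured impulse
# response (nominal decay of roughly 2 s).
fs = 8000
t = np.arange(0, 3.0, 1.0 / fs)
rng = np.random.default_rng(1)
ir = rng.normal(size=t.size) * np.exp(-t / 0.3)
print(f"estimated reverberation time ≈ {schroeder_rt(ir, fs):.2f} s")
```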
Abstract:
Dynamic and Partial Reconfiguration (DPR) allows a system to modify certain parts of itself at run-time. This feature gives rise to the capability of evolution: changing parts of the configuration according to the online evaluation of performance or other parameters. The evolution is achieved through a bio-inspired model in which the features of the system are identified as genes. The objective of the evolution need not be a single one; in this work, power consumption is taken into consideration together with the quality of filtering of a noisy image, which serves as the measure of performance. Pareto optimality is applied to the evolutionary process in order to find a representative set of solutions that are optimal with respect to performance and power consumption. The main contributions of this paper are the implementation of an evolvable system on a low-power Spartan-6 FPGA included in a Wireless Sensor Network node and, by making a real measure of power consumption available at run-time, the achievement of multi-objective evolution, which yields different optimal configurations; which one is selected depends on the relative “weights” given to performance and power consumption.
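A minimal sketch of the Pareto-optimality step described above, with invented numbers: given candidate configurations evaluated on filtering error and measured power (both to be minimized), the non-dominated subset is kept as the representative set of optimal solutions.

```python
def pareto_front(points):
    """Return the non-dominated subset of (error, power) pairs, both to be
    minimized: a point survives if no other point is at least as good in
    both objectives and strictly better in at least one."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical evaluations of evolved filter configurations:
# (filtering error, measured power in mW); values are illustrative only.
evaluated = [(0.12, 95.0), (0.09, 120.0), (0.12, 110.0), (0.20, 70.0), (0.09, 130.0)]
print(pareto_front(evaluated))
```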
Abstract:
Computed tomography (CT) is the reference imaging modality for the study of lung diseases and the pulmonary vasculature. General lung vessel segmentation has been widely explored by the biomedical image processing community in recent years; however, the differentiation of arterial from venous irrigation is still an open problem. Indeed, the automatic separation of the arterial and venous trees is considered one of the major remaining challenges in biomedical image processing. Artery-vein (AV) segmentation would allow the two irrigations to be studied separately, with important consequences in different medical scenarios and in multiple pulmonary diseases or pathological states. Features such as the density, geometry, topology and size of blood vessels could be analyzed in diseases that involve remodeling of the pulmonary vasculature, making it even possible to discover new specific biomarkers that remain hidden today. Differentiation between arteries and veins could also help improve and develop methods for processing other pulmonary structures. Nevertheless, despite its evident usefulness, studying the effect of disease on the arterial and venous trees has so far been unfeasible: the extreme complexity of the pulmonary vascular trees makes a manual separation of both structures impossible within realistic time, which further motivates the design of automatic or semi-automatic tools for the task. The absence of correctly segmented and labeled cases, however, severely limits the development of AV separation systems, which need reference images both to train and to validate the algorithms. For that reason, the design of synthetic lung CT images could overcome these difficulties by providing a database of pseudo-realistic cases in a constrained and controlled environment in which every part of the image (including arteries and veins) is unequivocally differentiated.
In this Ph.D. thesis we address both of these strongly interrelated problems. First, we describe a framework to automatically generate computational CT phantoms of the human lung. Starting from a priori biological and image-based knowledge about the topology of, and relationships between, the pulmonary structures, the system generates synthetic airways, pulmonary arteries and veins using iterative growth methods, which are then merged into a simulated lung with realistic features. These synthetic cases, together with real non-contrast CT images, have been used in the development of a fully automatic pulmonary AV segmentation/separation method. The approach comprises a generic extraction of pulmonary vessels using scale-space particles, followed by an AV classification of those particles with Graph-Cuts (GC) based on arterial/venous similarity scores obtained with machine learning algorithms and on connectivity information between particles.
The pulmonary phantoms were validated through visual inspection and quantitative measurements of intensity distributions, dispersion of structures and artery-airway relationships, which show good correspondence between real and synthetically generated lungs. The evaluation of the AV segmentation algorithm, based on different strategies for assessing the accuracy of vessel-particle classification, reveals accurate differentiation between arteries and veins in both real and synthetic cases, opening a wide range of possibilities for the clinical study of cardiopulmonary diseases and for the development of methodologies and new algorithms for the analysis of pulmonary images.
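The sketch below is only an illustration of the kind of energy that a graph-cut AV classification of this type minimizes, with toy data replacing the thesis's particles: a unary term derived from a machine-learning artery probability per vessel particle plus a pairwise term that penalizes assigning different labels to connected particles. Graph cuts solve such problems exactly for submodular pairwise terms; here an exhaustive search over a handful of particles stands in for the solver.

```python
import itertools
import numpy as np

def energy(labels, p_artery, edges, lam=1.0):
    """MRF-style labeling energy: a unary term from the ML artery probability
    of each vessel particle plus a pairwise term penalizing different labels
    on connected particles (labels: 1 = artery, 0 = vein)."""
    unary = sum(-np.log(p if l == 1 else 1 - p)
                for l, p in zip(labels, p_artery))
    pairwise = sum(lam * w for (i, j, w) in edges if labels[i] != labels[j])
    return unary + pairwise

# Five toy particles: ML pre-classification scores and connectivity weights.
p_artery = [0.9, 0.8, 0.55, 0.2, 0.1]
edges = [(0, 1, 2.0), (1, 2, 2.0), (3, 4, 2.0), (2, 3, 0.1)]

best = min(itertools.product([0, 1], repeat=len(p_artery)),
           key=lambda lab: energy(lab, p_artery, edges))
print("labels (1 = artery):", best)
```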
Abstract:
Background The aim of this study is to present the face, content, and construct validity of the endoscopic orthogonal video system (EndoViS) training system and to determine its efficiency as a training and objective assessment tool for surgeons' psychomotor skills. Methods Thirty-five surgeons and medical students participated in this study: 11 medical students, 19 residents, and 5 experts. All participants performed four basic skill tasks using conventional laparoscopic instruments and the EndoViS training system. Subsequently, participants filled out a questionnaire regarding the design, realism, overall functionality, and capability of the system to train hand-eye coordination and depth perception, rated on a 5-point Likert scale. Motion data of the instruments were obtained by means of two webcams built into a laparoscopic physical trainer. To identify the surgical instruments in the images, colored markers were placed on each instrument. Thirteen motion-related metrics were used to assess the laparoscopic performance of the participants. Statistical analysis of performance was carried out between the novice, intermediate, and expert groups. The internal consistency of all metrics was analyzed with Cronbach's α test. Results Overall scores on the features of the EndoViS system were positive. Participants agreed on the usefulness of the tasks and the training capabilities of the EndoViS system (score >4). Results showed significant differences in the execution of three of the skill tasks performed by participants. Seven metrics showed construct validity for the assessment of performance, with high consistency levels. Conclusions The EndoViS training system has been successfully validated. Results showed that EndoViS was able to differentiate between participants of varying laparoscopic experience. This simulator is a useful and effective tool for the objective assessment of surgeons' laparoscopic psychomotor skills.
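For reference, Cronbach's α, used above to check the internal consistency of the metrics, can be computed as in the following sketch; the participant-by-metric score matrix is invented toy data, not the study's.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (participants x metrics) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical normalized scores of 6 participants on 4 motion metrics
# (e.g., path length, time, smoothness, economy of movement); toy data only.
data = [[0.9, 0.80, 0.85, 0.90],
        [0.7, 0.60, 0.65, 0.70],
        [0.4, 0.50, 0.45, 0.50],
        [0.8, 0.75, 0.80, 0.85],
        [0.3, 0.35, 0.30, 0.40],
        [0.6, 0.55, 0.60, 0.65]]
print(f"Cronbach's alpha = {cronbach_alpha(data):.2f}")
```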
Abstract:
Working memory refers to the ability of the brain to store and manipulate information over brief time periods, ranging from seconds to minutes. As opposed to long-term memory, which is critically dependent upon hippocampal processing, critical substrates for working memory are distributed in a modality-specific fashion throughout cortex. N-methyl-D-aspartate (NMDA) receptors play a crucial role in the initiation of long-term memory. Neurochemical mechanisms underlying the transient memory storage required for working memory, however, remain obscure. Auditory sensory memory, which refers to the ability of the brain to retain transient representations of the physical features (e.g., pitch) of simple auditory stimuli for periods of up to approximately 30 sec, represents one of the simplest components of the brain working memory system. Functioning of the auditory sensory memory system is indexed by the generation of a well-defined event-related potential, termed mismatch negativity (MMN). MMN can thus be used as an objective index of auditory sensory memory functioning and a probe for investigating underlying neurochemical mechanisms. Monkeys generate cortical activity in response to deviant stimuli that closely resembles human MMN. This study uses a combination of intracortical recording and pharmacological micromanipulations in awake monkeys to demonstrate that both competitive and noncompetitive NMDA antagonists block the generation of MMN without affecting prior obligatory activity in primary auditory cortex. These findings suggest that, on a neurophysiological level, MMN represents selective current flow through open, unblocked NMDA channels. Furthermore, they suggest a crucial role of cortical NMDA receptors in the assessment of stimulus familiarity/unfamiliarity, which is a key process underlying working memory performance.
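As a minimal illustration of MMN as an objective index, the sketch below computes the conventional deviant-minus-standard difference wave and takes its most negative value in a typical latency window; the toy ERPs, the window and the sign convention are illustrative assumptions, not values from this study.

```python
import numpy as np

def mmn_amplitude(standard_erp, deviant_erp, times, window=(0.1, 0.25)):
    """MMN is conventionally quantified from the deviant-minus-standard
    difference wave; return its most negative value within a latency
    window given in seconds."""
    diff = np.asarray(deviant_erp) - np.asarray(standard_erp)
    mask = (times >= window[0]) & (times <= window[1])
    return float(diff[mask].min())

# Toy averaged ERPs sampled at 1 kHz (microvolts), epoch from -100 to 400 ms.
times = np.arange(-0.1, 0.4, 0.001)
standard = 0.5 * np.sin(2 * np.pi * 4 * times)
deviant = standard - 2.0 * np.exp(-((times - 0.17) ** 2) / (2 * 0.02 ** 2))
print(f"MMN amplitude ≈ {mmn_amplitude(standard, deviant, times):.2f} µV")
```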
Abstract:
It is a familiar experience that we tend to close our eyes or divert our gaze when concentrating attention on cognitively demanding tasks. We report on the brain activity correlates of directing attention away from potentially competing visual processing and toward processing in another sensory modality. Results are reported from a series of positron-emission tomography studies of the human brain engaged in somatosensory tasks, in both "eyes open" and "eyes closed" conditions. During these tasks, there was a significant decrease in the regional cerebral blood flow in the visual cortex, which occurred irrespective of whether subjects had to close their eyes or were instructed to keep their eyes open. These task-related deactivations of the association areas belonging to the nonrelevant sensory modality were interpreted as being due to decreased metabolic activity. Previous research has clearly demonstrated selective activation of cortical regions involved in attention-demanding modality-specific tasks; however, the other side of this story appears to be one of selective deactivation of unattended areas.
Proactive and reactive inhibition during overt and covert actions. An electrical neuroimaging study.
Abstract:
Response inhibition is the ability to suppress inadequate but automatically activated, prepotent or ongoing response tendencies. In the framework of motor inhibition, two distinct operating strategies have been described: “proactive” and “reactive” control modes. In the proactive modality, inhibition is recruited in advance by predictive signals and actively maintained before its enactment. Conversely, in the reactive control mode, inhibition is phasically enacted after the detection of the inhibitory signal. To date, ample evidence points to a core cerebral network for reactive inhibition comprising the right inferior frontal gyrus (rIFG), the presupplementary motor area (pre-SMA) and the basal ganglia (BG). Moreover, fMRI studies have shown that cerebral activations during proactive and reactive inhibition largely overlap. These findings suggest that at least part of the neural network for reactive inhibition is recruited in advance, priming cortical regions in preparation for the upcoming inhibition. So far, proactive and reactive inhibitory mechanisms have been investigated during tasks in which the requested response to be stopped or withheld was an “overt” action execution (AE) (i.e., a movement effectively performed). Nevertheless, inhibitory mechanisms are also relevant for motor control during “covert actions” (i.e., potential motor acts not overtly performed), such as motor imagery (MI). MI is the conscious, voluntary mental rehearsal of action representations without any overt movement. Previous studies revealed a substantial overlap of activated motor-related brain networks in premotor, parietal and subcortical regions during overtly executed and imagined movements. Notwithstanding this evidence for a shared set of cerebral regions involved in encoding actions, whether or not those actions are effectively executed, the neural bases of the motor inhibition that, during MI, prevents the covert action from being overtly performed despite the activation of the motor system remain to be fully clarified. Against this background, we performed a high-density EEG study evaluating the cerebral mechanisms and their related sources elicited during two types of cued Go/NoGo task, requiring the execution or withholding of an overt (Go) or a covert (MI) action, respectively. The EEG analyses were performed in two steps, with different aims: 1) Analysis of the “response phase” of the cued overt and covert Go/NoGo tasks, for the evaluation of reactive inhibitory control of overt and covert actions. 2) Analysis of the “preparatory phase” of the cued overt and covert Go/NoGo EEG datasets, focusing on cerebral activities time-locked to the preparatory signals, for the evaluation of proactive inhibitory mechanisms and their related neural sources. For these purposes, a spatiotemporal analysis of the scalp electric fields was applied to the EEG data recorded during the overt and covert Go/NoGo tasks. The spatiotemporal approach provides an objective definition of time windows for source analysis, relying on statistical evidence that the electric fields are different and thus generated by different neural sources. The analysis of the “response phase” revealed that key nodes of the inhibitory circuit, underpinning inhibition of the overt movement during the NoGo response, were also activated during the MI enactment.
In both cases, inhibition relied on the activation of pre-SMA and rIFG, but with different temporal patterns of activation in accord with the intended “covert” or “overt” modality of motor performance. During the NoGo condition, the pre-SMA and rIFG were sequentially activated, pointing to an early decisional role of pre-SMA and to a later role of rIFG in the enactment of inhibitory control of the overt action. Conversely, a concomitant activation of pre-SMA and rIFG emerged during the imagined motor response. This latter finding suggested that an inhibitory mechanism (likely underpinned by the rIFG) could be prewired into a prepared “covert modality” of motor response, as an intrinsic component of the MI enactment. This mechanism would allow the rehearsal of the imagined motor representations without any overt movement. The analyses of the “preparatory phase” confirmed, in both the overt and covert Go/NoGo tasks, the priming of cerebral regions belonging to the putative inhibitory network that is reactively triggered in the subsequent response phase. Nonetheless, differences in the preparatory strategies between the two tasks emerged, depending on the intended “overt” or “covert” modality of the possible incoming motor response. During the preparation of the overt Go/NoGo task, the cue primed the possible overt response programs in motor and premotor cortex. At the same time, through preactivation of a pre-SMA-related decisional mechanism, it triggered a parallel preparation for the successful response selection and/or inhibition during the subsequent response phase. Conversely, the preparatory strategy for the covert Go/NoGo task was centred on the goal-oriented priming of an inhibitory mechanism related to the rIFG that, being tuned to the instructed covert modality of the motor performance and instantiated during the subsequent MI enactment, allowed the imagined response to remain a potential motor act. Taken together, the results of the present study demonstrate a substantial overlap of the cerebral networks activated during proactive recruitment and subsequent reactive enactment of motor inhibition in both overt and covert actions. At the same time, our data show that preparatory cues predisposed ab initio a different organization of the cerebral areas (in particular of the pre-SMA and rIFG) involved in sensorimotor transformations and motor inhibitory control for executed and imagined actions. During the preparatory phases of our cued overt and covert Go/NoGo tasks, the different strategies adopted were tuned to the “how” of the motor performance, reflecting the intended overt or covert modality of the possible incoming action.
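Two standard quantities used in this kind of spatiotemporal analysis of scalp fields are Global Field Power (the spatial standard deviation of the potential across electrodes) and the global dissimilarity between two normalized maps; the sketch below computes both on invented toy maps. The study's specific statistics and electrode montage are not given here, so this is only an illustration of the general approach.

```python
import numpy as np

def gfp(v):
    """Global Field Power: spatial standard deviation of the scalp potential
    across electrodes at one time point (average reference assumed)."""
    v = v - v.mean()
    return float(np.sqrt(np.mean(v ** 2)))

def dissimilarity(v_a, v_b):
    """Global map dissimilarity between two conditions: GFP of the difference
    of the GFP-normalized, average-referenced maps (0 = identical topography,
    values near 2 = opposite polarity)."""
    a = (v_a - v_a.mean()) / gfp(v_a)
    b = (v_b - v_b.mean()) / gfp(v_b)
    return gfp(a - b)

# Toy maps over 8 electrodes for two conditions at one latency.
rng = np.random.default_rng(2)
go = rng.normal(size=8)
nogo = go + rng.normal(scale=0.5, size=8)   # similar but not identical field
print(f"GFP(Go) = {gfp(go):.2f}, dissimilarity = {dissimilarity(go, nogo):.2f}")
```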
Abstract:
Second-season maize, also known as "safrinha" maize, is defined as maize sown between January and March. In the 2013/2014 agricultural year this cropping modality reached a planted area of 9.18 million hectares, exceeding the area grown with first-season maize, which in the same period was 6.61 million hectares. In the second season there is a high risk of climatic instability, mainly due to low temperatures, frosts, poorly distributed rainfall and a shorter photoperiod. All of these factors impair the photosynthetic activity of maize and reduce its yield. Nevertheless, given the importance of this crop, public and private companies and universities have been seeking to increase yield and stability, and certain traits are particularly emphasized for this purpose. Because of the high risk of losses due to environmental adversities, many growers invest little in fertilization, especially nitrogen fertilization. In this context, the development of plants that are more efficient in nitrogen use and/or more tolerant to nitrogen stress would give growers greater security. Earliness is also highly important, since early materials reduce the risk of losses in this period; it must, however, always be associated with high yield. For the simultaneous selection of these traits, one can resort to per se indices of plant response to stress, graphical analyses and/or simultaneous selection indices. Additionally, the genotypic values of the lines for these traits can be predicted via single-trait REML/BLUP (univariate analysis) or via multi-trait REML/BLUP (multivariate analysis), in which case the genotypic values are corrected for the covariance between traits. The objective of this work was therefore to verify the possibility of simultaneous selection for nitrogen-use efficiency and nitrogen-stress tolerance, as well as for early, productive plants. To this end, tropical maize inbred lines were grown and evaluated for these traits, and several simultaneous-selection scenarios were simulated. The results showed that the per se stress-response index Harmonic Mean of Relative Performance (MHPR) was the most efficient for selecting plants that are efficient in nitrogen use and tolerant to nitrogen stress. This was due to the strong unfavourable correlation between the indices that estimate efficiency and tolerance, and to the superiority of this per se index in accuracy, heritability and selection gains. For the simultaneous selection of yield and earliness, the additive simultaneous-selection index using genotypic values predicted via single-trait REML/BLUP proved the most efficient, since it achieved satisfactory gains in all traits and allows the gains in each trait to be modulated more satisfactorily. It is concluded that simultaneous selection is possible both for nitrogen-use efficiency and nitrogen-stress tolerance and for yield and earliness. Moreover, the choice of the best simultaneous-selection method depends on the magnitude and direction of the correlation between the traits.
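As an illustration of the MHPR index mentioned above, the sketch below computes it as the harmonic mean of a genotype's relative performance (its yield divided by the trial mean) across the low- and high-nitrogen environments; this is a common way the index is defined, and the yields and trial means are invented for the example.

```python
def relative_performance(yields, env_means):
    """Relative performance of one genotype in each environment:
    its yield divided by the mean yield of that environment."""
    return [y / m for y, m in zip(yields, env_means)]

def mhpr(yields, env_means):
    """Harmonic mean of relative performance (MHPR) across environments
    (here, a low- and a high-nitrogen trial): it favours genotypes that are
    both efficient under stress and responsive without it."""
    rp = relative_performance(yields, env_means)
    return len(rp) / sum(1.0 / r for r in rp)

# Toy grain yields (t/ha) of two inbred lines under low and high nitrogen,
# with assumed trial means of 4.0 and 7.0 t/ha respectively.
env_means = [4.0, 7.0]
print(f"line A: MHPR = {mhpr([4.8, 7.7], env_means):.2f}")   # good in both
print(f"line B: MHPR = {mhpr([3.0, 8.4], env_means):.2f}")   # only responsive
```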
Abstract:
The perception of the causes of events is part of everyday life. Causes are attributed to events in the search for an understanding that allows future outcomes to be predicted, controlled and changed. Questions such as "Why did I fail the exam?" and "Why did I place first in the entrance examination?" lead to the perception of an explanatory cause, whether for failure or for success. Causal Attribution Theory has been examined to understand and explain how people interpret the determinants of their success or failure in performance situations. The perceived causes are related to each individual's perception of the event, which does not imply real causality, since action is taken according to each individual's perception of the event. The theoretical approach used in this research was Causal Attribution Theory, as proposed by Bernard Weiner, in the educational context, focusing on causal attributions for academic success and failure. In this setting, the main objective of the research was to identify the causes perceived as explaining the academic performance of Accounting (Ciências Contábeis) students. The study also sought evidence to support the discussion of the relationship between academic success and failure, teaching modality, self-esteem and student profile. Data were collected through a questionnaire administered to Accounting students of two Federal Universities that offer the programme in two teaching modalities (on-campus and distance learning), and 738 valid responses were obtained for analysis. The questionnaire was structured in three blocks (I - performance and perceived causes, II - self-esteem measurement and III - student profile). The results showed a mean student age of 27.4 years (34.27 in the distance-learning modality and 24.87 in the on-campus modality). Most students (83%) held paid jobs (90% in distance learning and 80% on campus), and women represented the majority of respondents (62% in the distance-learning modality and 58% in the on-campus modality). Internal causes, specifically effort and ability, were the most frequently cited to explain academic success, whereas external causes, specifically task difficulty, schedule flexibility and the negative influence of the teacher, were the most frequently cited to explain academic failure. Since students frequently cited their own ability to explain success, a self-serving bias may be at work, which helps to maintain self-esteem through its positive influence on motivation. Among the causes of success, ability was associated with a higher level of self-esteem and luck with a lower level; among the causes of failure, task difficulty was associated with the lowest level of self-esteem. An overall analysis shows that the students devote little time to studying, attribute success mainly to themselves and failure to others, and display high self-esteem associated mainly with success attributed to ability. For future research, pilot studies are recommended, using a qualitative methodological approach, to define other causal attributions for the design of new data-collection instruments, which could extend these findings and contribute to the literature.
Abstract:
Compilation tuning is the process of adjusting the values of compiler options to improve some features of the final application. In this paper, a strategy based on the use of a genetic algorithm and a multi-objective scheme is proposed to deal with this task. Unlike previous works, we try to take advantage of domain knowledge to provide a problem-specific genetic operator that improves both the speed of convergence and the quality of the results. The strategy is evaluated by means of a case study aimed at improving the performance of the well-known Apache web server. Experimental results show that an overall improvement of 7.5% can be achieved. Furthermore, the adaptive approach has shown the ability to markedly speed up the convergence of the original strategy.
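The paper's problem-specific genetic operator is not described in the abstract, so the hedged sketch below only shows a generic bit-string encoding of compiler options with a placeholder fitness and a simple mutation, to make the search space concrete. The flag names are ordinary GCC options chosen for illustration, not the paper's actual option set.

```python
import random

# Hypothetical search space: each gene switches one compiler option on/off.
FLAGS = ["-O2", "-funroll-loops", "-finline-functions", "-fomit-frame-pointer"]

def random_individual():
    return [random.randint(0, 1) for _ in FLAGS]

def to_command_line(genes):
    """Translate a chromosome into the option string passed to the compiler."""
    return " ".join(f for f, g in zip(FLAGS, genes) if g)

def fitness(genes):
    """Placeholder fitness: a real run would rebuild the application with
    to_command_line(genes) and benchmark it (e.g., Apache requests per second).
    Counting enabled flags is a stand-in so the sketch is runnable."""
    return sum(genes)

def mutate(genes, rate=0.25):
    """Simple bit-flip mutation over the option genes."""
    return [1 - g if random.random() < rate else g for g in genes]

random.seed(0)
population = [random_individual() for _ in range(6)]
best = max(population, key=fitness)
print("best option string:", to_command_line(best) or "(none)")
print("mutated child:     ", to_command_line(mutate(best)) or "(none)")
```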
Abstract:
In this work, we analyze the effect of demand uncertainty on the multi-objective optimization of chemical supply chains (SC), considering their economic and environmental performance simultaneously. To this end, we present a stochastic multi-scenario mixed-integer linear program (MILP) with the unique feature of explicitly incorporating demand uncertainty through scenarios with given probabilities of occurrence. The environmental performance is quantified following life cycle assessment (LCA) principles, which are represented in the model formulation through standard algebraic equations. The capabilities of our approach are illustrated through a case study. We show that the stochastic solution improves the economic performance of the SC compared with the deterministic one at any level of environmental impact.
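A minimal sketch, with invented numbers, of the scenario-based trade-off described above: a single first-stage capacity decision is evaluated by its probability-weighted expected profit and expected life-cycle impact over discrete demand scenarios. The real MILP has many more decisions and LCA terms; this only illustrates the expected-value treatment of demand uncertainty.

```python
# Scenario set and all coefficients are illustrative assumptions.
scenarios = [            # (probability, demand in kt/yr)
    (0.3, 80.0),
    (0.5, 100.0),
    (0.2, 130.0),
]
UNIT_PROFIT = 1.2        # M$ per kt sold
CAPEX_PER_KT = 0.4       # M$ per kt of installed capacity
IMPACT_PER_KT = 2.0      # LCA impact units per kt produced

def evaluate(capacity):
    """Expected profit and expected environmental impact of one capacity
    choice: production in each scenario is min(capacity, demand)."""
    exp_profit, exp_impact = 0.0, 0.0
    for prob, demand in scenarios:
        produced = min(capacity, demand)
        exp_profit += prob * (UNIT_PROFIT * produced - CAPEX_PER_KT * capacity)
        exp_impact += prob * IMPACT_PER_KT * produced
    return exp_profit, exp_impact

for cap in (80.0, 100.0, 130.0):
    profit, impact = evaluate(cap)
    print(f"capacity {cap:>5.0f} kt: E[profit] = {profit:6.1f} M$, "
          f"E[impact] = {impact:6.1f}")
```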