830 results for Gradient-based approaches
Abstract:
Current views of the nature of knowledge and of learning suggest that instructional approaches in science education should pay closer attention to how students learn rather than to how teachers teach. This study examined the use of approaches to teaching science based on two contrasting perspectives on learning, social constructivist and traditional, and the effects they have on students' attitudes and achievement. Four categories of attitudes were measured using the Upper Secondary Attitude Questionnaire: attitude towards school, towards the importance of science, towards science as a career, and towards science as a subject in school. Achievement was measured by average class grades and also with a researcher/teacher-constructed 30-item test comprising three sub-scales of items based on knowledge and on applications involving near-transfer and far-transfer of concepts. The sample consisted of 202 students in nine intact chemistry classrooms at a large high school in Miami, Florida, and involved two teachers. Results were analyzed using a two-way analysis of covariance (ANCOVA), with a pretest in attitude as the covariate for attitudes and prior achievement as the covariate for achievement. A comparison of the adjusted mean scores was made between the two groups and between females and males. With constructivist-based teaching, students showed a more favorable attitude towards science as a subject and obtained significantly higher scores in class achievement, total achievement, and achievement on the knowledge sub-scale of the knowledge and application test. Students in the traditional group showed a more favorable attitude towards school. Females showed a significantly more positive attitude towards the importance of science and obtained significantly higher scores in class achievement. No significant interaction effects were obtained for method of instruction by gender. This study lends some support to the view that constructivist-based approaches to teaching science are a viable alternative to traditional modes of teaching. It is suggested that in science education, more consideration be given to those aspects of classroom teaching that foster closer coordination between social influences and individual learning.
Abstract:
The treatment of sensory neuropathies, whether inherited or caused by trauma, the progression of diabetes, or other disease states, is among the most difficult problems in modern clinical practice. Cell therapy to release antinociceptive agents near the injured spinal cord would be the logical next step in the development of treatment modalities, yet few clinical trials, especially for chronic pain, have tested the transplantation of cells or a cell line to treat human disease. The history of the research and development of useful cell-transplant-based approaches offers an understanding of the advantages and problems associated with these technologies; as an adjuvant or replacement for current pharmacological treatments, cell therapy is a likely near-future clinical tool for improved health care.
Abstract:
Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of design variables [1]. The approach in this paper links the adjoint surface sensitivities (the gradient of the objective function with respect to surface movement) with the parametric design velocities (the movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to CAD variables.
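For clarity, this linkage can be written as a single surface integral. The notation below is assumed for illustration and is not taken from the paper: J is the objective function, alpha a CAD parameter, Gamma the model boundary, and n the outward surface normal.

```latex
% Sketch (assumed notation): gradient of the objective J with respect to a
% CAD parameter alpha, obtained by integrating the adjoint surface
% sensitivity against the parametric design velocity over the boundary Gamma.
\frac{dJ}{d\alpha}
  = \int_{\Gamma}
      \underbrace{\frac{\partial J}{\partial x_n}}_{\text{adjoint surface sensitivity}}
      \;
      \underbrace{V_{\alpha}}_{\text{parametric design velocity}}
    \, d\Gamma,
\qquad
V_{\alpha} = \frac{\partial \boldsymbol{x}}{\partial \alpha} \cdot \boldsymbol{n}.
```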
For the successful implementation of shape optimisation strategies in practical industrial cases, the choice of design variables, or of the parameterisation scheme used for the model to be optimised, plays a vital role. Where the goal is to base the optimisation on a CAD model, the choices are to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history, which preserves the design intent [3]. The main advantage of using the feature-based model is that the optimised model produced can be used directly for downstream applications, including manufacturing and process planning.
This paper presents an approach for optimisation based on the feature-based CAD model, which uses the CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to a change in a design variable, the "Parametric Design Velocity" is calculated, defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advance in capability and robustness over that described by Robinson et al. [3]. The process can be easily integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous ("real-valued") parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as the software has an API which provides access to the values of the parameters controlling the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD models before and after the parameter perturbation. The implementation involves calculating the geometrical movement along the normal direction between two discrete representations of the original and perturbed geometries. Parametric design velocities can then be linked directly with the adjoint surface sensitivities to extract the gradients used in a gradient-based optimisation algorithm.
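A minimal sketch of the finite-difference design velocity computation described above, assuming the original and perturbed geometries have already been exported from the CAD system as tessellated point sets; the function and parameter names are illustrative, and the nearest-neighbour query stands in for a proper projection of each point onto the perturbed surface.

```python
import numpy as np
from scipy.spatial import cKDTree


def design_velocity(nodes, normals, perturbed_nodes, d_alpha):
    """Finite-difference parametric design velocity (illustrative sketch).

    nodes           : (N, 3) points sampled on the original CAD surface
    normals         : (N, 3) unit outward normals at those points
    perturbed_nodes : (M, 3) points sampled on the surface after perturbing
                      one CAD parameter by d_alpha
    Returns the normal movement of the boundary per unit parameter change.
    """
    # Closest perturbed point stands in for a projection along the normal.
    _, idx = cKDTree(perturbed_nodes).query(nodes)
    displacement = perturbed_nodes[idx] - nodes
    # Keep only the component of the movement along the outward normal.
    normal_movement = np.einsum("ij,ij->i", displacement, normals)
    return normal_movement / d_alpha
```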
A flow optimisation problem is presented in which the power dissipation of the flow in an automotive air duct is to be reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost function gradients. A line-search algorithm is then used to update the design variables and proceed further with the optimisation process.
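Purely as an illustration of the workflow (none of the helper names below come from the paper), the gradient assembly and line-search update might be organised as follows; "evaluate" is assumed to wrap the re-meshing and CFD evaluation of the power dissipation.

```python
import numpy as np


def cad_gradient(surface_sensitivity, design_velocities, face_areas):
    """One gradient component per CAD parameter: a discrete surface integral of
    the adjoint sensitivity field against each parametric design velocity."""
    return np.array([np.sum(surface_sensitivity * v * face_areas)
                     for v in design_velocities])


def line_search_step(params, cost, grad, evaluate, step=1.0, shrink=0.5, tries=10):
    """Backtracking line search along the steepest-descent direction."""
    for _ in range(tries):
        trial = params - step * grad
        trial_cost = evaluate(trial)      # e.g. re-mesh and re-run the CFD solver
        if trial_cost < cost:
            return trial, trial_cost
        step *= shrink                    # shrink the step until the cost decreases
    return params, cost                   # no improvement found along this direction
```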
Abstract:
Intracochlear trauma from the surgical insertion of bulky electrode arrays and inadequate pitch perception are areas of concern with current hand-assembled commercial cochlear implants. Parylene thin-film arrays with higher electrode densities and lower profiles are a potential solution, but they lack rigidity and hence depend on bulky backing devices based on manually fabricated, permanently attached polyethylene terephthalate (PET) tubing. As a solution, we investigated a new backing device with two sub-systems. The first sub-system is a thin poly(lactic acid) (PLA) stiffener that will be embedded in the parylene array. The second sub-system is an attaching and detaching mechanism, utilizing a poly(N-vinylpyrrolidone)-block-poly(d,l-lactide) (PVP-b-PDLLA) copolymer-based biodegradable and water-soluble adhesive, that will allow the PET insertion tool to be retracted after implantation. As a proof of concept of sub-system one, a microfabrication process for patterning PLA stiffeners embedded in parylene was developed. Conventional hot embossing, mechanical micromachining, and standard cleanroom processes were integrated for patterning fully released and discrete stiffeners coated with parylene. The released embedded stiffeners were thermoformed to demonstrate that imparting perimodiolar shapes to stiffener-embedded arrays will be possible. The developed process, when integrated with the array fabrication process, will allow fabrication of stiffener-embedded arrays in a single process. As a proof of concept of sub-system two, the feasibility of the attaching and detaching mechanism was demonstrated by adhering 1x and 1.5x scale PET tube-based insertion tools to PLA stiffeners embedded in parylene using the copolymer adhesive. The attached devices survived qualitative adhesion tests, thermoforming, and flexing. The viability of the detaching mechanism was tested by aging the assemblies in vitro in phosphate buffer solution. The average detachment times, 2.6 minutes and 10 minutes for the 1x and 1.5x scale devices respectively, were found to be clinically relevant with respect to the reported array insertion times during surgical implantation. Eventually, stiffener-embedded arrays would not need to be permanently attached to current insertion tools, which are left behind after implantation and congest the cochlear scala tympani chamber. Finally, a simulation-based approach for accelerated failure analysis of the PLA stiffeners and characterization of the PVP-b-PDLLA copolymer adhesive was explored. The residual functional life of embedded PLA stiffeners exposed to body fluid, and thereby subjected to degradation and erosion, was estimated by simulating PLA stiffeners with different parylene coating failure types and different PLA types for a given parylene coating failure type. For characterizing the PVP-b-PDLLA copolymer adhesive, several formulations of the copolymer adhesive were simulated and compared based on the insertion tool detachment times predicted from the dissolution, degradation, and erosion behavior of the simulated adhesive formulations. The results indicate that the simulation-based approaches could be used to reduce the total number of time-consuming and expensive in vitro tests that must be conducted.
Abstract:
Nowadays robotic applications are widespread and most manipulation tasks are solved efficiently. However, Deformable Objects (DOs) still represent a major limitation for robots. The main difficulty in DO manipulation is dealing with shape and dynamics uncertainties, which prevents the use of model-based approaches (since they are excessively computationally complex) and makes sensory data difficult to interpret. This thesis reports the research activities aimed at addressing some applications in robotic manipulation and sensing of Deformable Linear Objects (DLOs), with a particular focus on electric wires. In all of the works, a significant effort was made in the study of an effective strategy for analyzing sensory signals with various machine learning algorithms. In the first part of the document, the main focus concerns the wire terminals, i.e. detection, grasping, and insertion. First, a pipeline that integrates vision and tactile sensing is developed; then further improvements are proposed for each module. A novel procedure is proposed to gather and label massive amounts of training images for object detection with minimal human intervention. Together with this strategy, we extend a generic object detector based on Convolutional Neural Networks for orientation prediction. The insertion task is also extended by developing a closed-loop controller capable of guiding the insertion of a longer and curved segment of wire through a hole, where the contact forces are estimated by means of a Recurrent Neural Network. In the second part of the thesis, the interest shifts to the DLO shape. Robotic reshaping of a DLO is addressed by means of a sequence of pick-and-place primitives, while a decision-making process driven by visual data learns the optimal grasping locations exploiting Deep Q-learning and finds the best releasing point. The success of the solution relies on a reliable interpretation of the DLO shape. For this reason, further developments are made on the visual segmentation.
Abstract:
The research project aims to improve the Design for Additive Manufacturing of metal components. First, the scenario of Additive Manufacturing is depicted, describing its role in Industry 4.0 and focusing in particular on Metal Additive Manufacturing technologies and applications in the Automotive sector. Second, the state of the art in Design for Additive Manufacturing is described, contextualizing the methodologies and classifying guidelines, rules, and approaches. The key phases of product design and process design needed to achieve lightweight functional designs and reliable processes are examined in depth, together with the Computer-Aided Technologies that support the implementation of these approaches. On this basis, a general Design for Additive Manufacturing workflow based on product and process optimization has been systematically defined. From the analysis of the state of the art, the use of a holistic approach has been considered fundamental, and thus the use of integrated product-process design platforms has been identified as a key element for its development. Indeed, a computer-based methodology exploiting integrated tools and numerical simulations to drive product and process optimization has been proposed. CAD platform-based approaches have been validated, and the potential offered by integrated tools has been evaluated. Concerning product optimization, systematic approaches to integrate topology optimization into the design have been proposed and validated through the product optimization of an automotive case study. Concerning process optimization, the use of process simulation techniques to prevent manufacturing flaws related to the high thermal gradients of metal processes is developed, with case studies that validate the results against experimental data and an application to the process optimization of an automotive case study. Finally, an example of product and process design through the proposed simulation-driven integrated approach is provided to prove the method's suitability for effective redesign of high-performance, Additive Manufacturing-based metal products. The results are then outlined, and further developments are discussed.
Abstract:
Global population growth reflects how humans have increasingly exploited Earth's resources. Urbanization develops along with anthropization. It is estimated that nearly 60% of the world's population lives in urban areas, which symbolize the denaturalized dimension of current modernity. Cities are artificial ecosystems that suffer most from environmental issues and climate change. The Urban Heat Island (UHI) effect is a common microclimatic phenomenon affecting cities, which causes considerable differences between urban and rural temperatures. Among the driving factors, the lack of vegetation in urban settlements can damage both humans and the environment (disease, deaths caused by heat waves, biodiversity loss, and so on). As the world continues to urbanize, sustainable development increasingly depends on the successful management of urban areas. To enhance cities' resilience, Nature-based Solutions (NbSs) are defined as an umbrella concept that encompasses a wide range of ecosystem-based approaches and actions for climate change adaptation (CCA) and disaster risk reduction (DRR). This paper analyzes a 15-day study on air temperature trends carried out in Isla, a small locality in the Maltese archipelago, and proposes scenarios characterized by Nature-based Solutions to mitigate the Urban Heat Island effect affecting this Mediterranean city. The results demonstrate that in some areas where vegetation is present, lower temperatures are recorded than in areas where vegetation is absent or scarce. It also appeared that in one location the specific type of vegetation does not contribute to mitigating high temperatures, whereas in another, different environmental parameters can influence the measurements. Among the case-specific Nature-based Solutions proposed are vertical greening (green walls, façades, ground-based greening, etc.), tree lines, green canopies, and green roofs.
Abstract:
This study investigated the disclosure of HIV-positive serostatus to sexual partners by heterosexual and bisexual men, selected at centers for HIV/AIDS care. In 250 interviews, we investigated disclosure of serostatus to partners, correlating disclosure with characteristics of the relationships. The focus group further explored barriers to the maintenance/establishment of partnerships and their association with disclosure and condom use. Fear of rejection led to isolation and distress, thus hindering disclosure to current and new partners. Disclosure requires trust and was more frequent to steady partners, to partners who were HIV-positive themselves, to female partners, and by heterosexuals, occurring less frequently with commercial sex workers. Most interviewees reported consistent condom use. Unprotected sex was more frequent with seropositive partners. Suggestions to enhance comprehensive care for HIV-positive men included stigma management, group activities, and human rights-based approaches involving professional education in care for sexual health, disclosure, and care of "persons living with HIV".
Abstract:
To obtain accurate and reliable gene expression results, it is essential that quantitative real-time RT-PCR (qRT-PCR) data are normalized with appropriate reference genes. The current exponential increase in postgenomic studies on the honey bee, Apis mellifera, makes the standardization of qRT-PCR results an important task for ongoing community efforts. To this end we selected four candidate reference genes (actin, ribosomal protein 49, elongation factor 1-alpha, tbp-association factor) and used three software-based approaches (geNorm, BestKeeper and NormFinder) to evaluate the suitability of these genes as endogenous controls. Their expression was examined during honey bee development, in different tissues, and after juvenile hormone exposure. Furthermore, the importance of choosing an appropriate reference gene was investigated for two developmentally regulated target genes. The results led us to consider all four candidate genes as suitable genes for normalization in A. mellifera. However, each condition evaluated in this study revealed a specific set of genes as the most appropriate ones.
Abstract:
The generator-coordinate method is a flexible and powerful reformulation of the variational principle. Here we show that by introducing a generator coordinate in the Kohn-Sham equation of density-functional theory, excitation energies can be obtained from ground-state density functionals. As a viability test, the method is applied to ground-state energies and various types of excited-state energies of atoms and ions from the He and the Li isoelectronic series. Results are compared to a variety of alternative DFT-based approaches to excited states, in particular time-dependent density-functional theory with exact and approximate potentials.
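As background, and not reproduced from the abstract itself, the generator-coordinate ansatz superposes a family of trial states labelled by a continuous coordinate, and the variational principle yields an integral eigenvalue equation whose higher solutions give excited-state energies; in the work described here the trial states are generated by varying a parameter in the Kohn-Sham equation.

```latex
% Generic generator-coordinate ansatz (assumed notation; the specific
% Kohn-Sham generating functions |Phi(alpha)> used in the paper are not
% reproduced here):
|\Psi\rangle = \int d\alpha \, f(\alpha)\, |\Phi(\alpha)\rangle ,
% Varying the energy with respect to the weight function f yields the
% Hill-Wheeler equation, whose spectrum of solutions E includes ground-
% and excited-state energies:
\int d\alpha' \,\Bigl[\,\langle\Phi(\alpha)|\hat H|\Phi(\alpha')\rangle
   - E\,\langle\Phi(\alpha)|\Phi(\alpha')\rangle\,\Bigr]\, f(\alpha') = 0 .
```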
Abstract:
As is well known, Hessian-based adaptive filters (such as the recursive least-squares (RLS) algorithm for supervised adaptive filtering, or the Shalvi-Weinstein algorithm (SWA) for blind equalization) converge much faster than gradient-based algorithms [such as the least-mean-squares (LMS) algorithm or the constant-modulus algorithm (CMA)]. However, when the problem is tracking a time-variant filter, the issue is not so clear-cut: there are environments for which each family presents better performance. Given this, we propose the use of a convex combination of algorithms from different families to obtain an algorithm with superior tracking capability. We show the potential of this combination and provide a unified theoretical model for the steady-state excess mean-square error of convex combinations of gradient- and Hessian-based algorithms, assuming a random-walk model for the parameter variations. The proposed model is valid for algorithms of the same or different families, and for supervised (LMS and RLS) or blind (CMA and SWA) algorithms.
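A minimal numerical sketch of the convex-combination idea for the supervised case (an LMS and an RLS filter driven by the same input), with illustrative parameter values that are not taken from the paper; the mixing parameter is itself adapted by a stochastic-gradient step on the combined error, as is common in this family of schemes.

```python
import numpy as np


def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))


def convex_combination(x, d, M=8, mu_lms=0.01, lam_rls=0.995, mu_a=1.0):
    """Convex combination of an LMS (gradient-based) and an RLS (Hessian-based)
    adaptive filter -- an illustrative sketch, not the paper's implementation.

    x : input signal, d : desired signal, M : filter length.
    The mixing parameter lambda = sigmoid(a) is adapted to minimise the
    combined squared error.
    """
    w_lms = np.zeros(M)
    w_rls = np.zeros(M)
    P = np.eye(M) * 1e3            # inverse correlation matrix estimate for RLS
    a = 0.0                        # mixing parameter, lambda = sigmoid(a)
    y_out = np.zeros(len(d))
    for n in range(M, len(d)):
        u = x[n - M:n][::-1]       # regressor (most recent sample first)
        y1, y2 = w_lms @ u, w_rls @ u
        lam = sigmoid(a)
        y = lam * y1 + (1.0 - lam) * y2
        e = d[n] - y
        y_out[n] = y
        # LMS update (gradient-based)
        w_lms += mu_lms * (d[n] - y1) * u
        # RLS update (Hessian-based)
        k = P @ u / (lam_rls + u @ P @ u)
        w_rls += k * (d[n] - y2)
        P = (P - np.outer(k, u @ P)) / lam_rls
        # stochastic-gradient step on the mixing parameter (clipped, as is common)
        a = np.clip(a + mu_a * e * (y1 - y2) * lam * (1.0 - lam), -4.0, 4.0)
    return y_out, sigmoid(a)
```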
Abstract:
The process of establishing long-range neuronal connections can be divided into at least three discrete steps. First, axons need to be stimulated to grow, and this growth must be towards appropriate targets. Second, after arriving at their target, axons need to be directed to their topographically appropriate position and in some cases, such as in cortical structures, they must grow radially to reach the correct laminar layer. Third, axons then arborize and form synaptic connections with only a defined subpopulation of potential post-synaptic partners. Attempts to understand these mechanisms in the visual system have been ongoing since pioneering studies in the 1940s highlighted the specificity of neuronal connections in the retino-tectal pathway. These classical systems-based approaches culminated in the 1990s with the discovery that Eph-ephrin repulsive interactions were involved in topographical mapping. In marked contrast, it was the cloning of the odorant receptor family that quickly led to a better understanding of axon targeting in the olfactory system. The last 10 years have seen the olfactory pathway rise in prominence as a model system for axon guidance. Once considered experimentally intractable, it is now providing a wealth of information on all aspects of axon guidance and targeting, with implications not only for our understanding of these mechanisms in the olfactory system but also in other regions of the nervous system.
Abstract:
This paper is an elaboration of the DECA algorithm [1] to blindly unmix hyperspectral data. The underlying mixing model is linear, meaning that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. The proposed method, like DECA, is tailored to highly mixed data in which geometry-based approaches fail to identify the simplex of minimum volume enclosing the observed spectral vectors. We therefore resort to a statistical framework, where the abundance fractions are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. With respect to DECA, we introduce two improvements: 1) the number of Dirichlet modes is inferred based on the minimum description length (MDL) principle; 2) the generalized expectation maximization (GEM) algorithm we adopt to infer the model parameters is improved by using alternating minimization and augmented Lagrangian methods to compute the mixing matrix. The effectiveness of the proposed algorithm is illustrated with simulated and real data.
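A compact statement of the model just described, with notation assumed here rather than taken from the paper: y_i is the observed spectral vector at pixel i, M collects the endmember signatures, s_i is the abundance vector with p components, n_i is additive noise, and the abundances follow a K-mode Dirichlet mixture whose order is selected by MDL.

```latex
% Linear mixing model per pixel i (assumed notation):
\mathbf{y}_i = \mathbf{M}\,\mathbf{s}_i + \mathbf{n}_i,
\qquad s_{ik} \ge 0, \qquad \sum_{k=1}^{p} s_{ik} = 1,
% with the abundance vectors modelled as a finite mixture of Dirichlet
% densities (K modes, weights epsilon_q, parameters theta_q):
p(\mathbf{s}_i) = \sum_{q=1}^{K} \epsilon_q\,
   \mathcal{D}\!\left(\mathbf{s}_i \mid \boldsymbol{\theta}_q\right),
\qquad \sum_{q=1}^{K}\epsilon_q = 1 .
```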
Abstract:
The evolution of new technology and its increasing use have for some years been making informal learning more and more visible, especially among young and older adults in both Higher Education and workplace contexts. However, the nature of formal and non-formal, course-based approaches to learning has made it hard to accommodate these informal processes satisfactorily, and although technology brings us nearer to a solution, it has not yet been achieved. The TRAILER project aims to address this problem by developing a tool for the management of competences and skills acquired through informal learning experiences, both from the perspective of the user and of the institution or company. This paper describes the main lines of research and development of this project.
Abstract:
Over recent years, (intelligent) software agents have been employed as a way to overcome the difficulties associated with managing, sharing, and reusing a growing volume of information, while ontologies have been used to model that information in a semantically explicit and rich format. As the popularity of the Semantic Web grows and ever more information is shared in the form of ontologies, the problem of integrating this information becomes more pressing. In such a context, it cannot be expected that two agents wishing to cooperate will use the same ontology to describe their conceptualization of the world. It may even be necessary for agents to interact without prior knowledge of the ontologies used by the others, requiring them to reconcile those ontologies at run time in a process commonly known as Ontology Mapping [1]. The ontology mapping process is normally offered as a service to business agents and can be requested whenever an alignment needs to be produced. However, given that each agent has its own needs and goals, as well as the inherently subjective nature of the ontologies they use, agents may have different interests regarding the alignment process and may even resort to the mapping services they consider most convenient [1]. Different matchers can produce distinct and even contradictory results, thereby creating conflicts between the agents. It is then necessary to attempt to resolve the existing conflicts through a negotiation process, so that the agents can reach a consensus on the correspondences to be used when translating the messages they exchange. Conflict resolution is considered a very important metric for the negotiation process [2]: the fewer the conflicts left unresolved in the negotiation process that generated it, the greater the confidence associated with an alignment. Thus, an alignment with a high number of unresolved conflicts has a lower confidence than the same alignment associated with a high number of resolved conflicts. The negotiation process by which two or more agents generate and agree on an alignment is called Ontology Mapping Negotiation. To date, two approaches have been proposed in the literature: (i) Argumentation-based (e.g. [3] [4]) and (ii) Relaxation-based [5] [6]. Each of these proposals presents a number of advantages and limitations. Several ways of combining the two techniques have been proposed [2], with the aim of benefiting from their advantages and overcoming their limitations. However, to date, no documented experiments are known that could support this claim, and so it is not possible to attest that such combinations actually bring the intended benefit. The work presented here aims to provide such experiments and to verify whether the claimed improvement over the results of the individual techniques holds. In order to enable the combination and to address the identified shortcomings, a new Relaxation-based approach is proposed, which is subsequently combined with the Argumentation-based approaches.
Its results, together with those of the combination, are presented and discussed here, making it possible to identify differences in the results generated by different combinations and possible contexts of use.