17 results for Time-resolved methods at Instituto Polit
Abstract:
S100A6 is a small EF-hand calcium- and zinc-binding protein involved in the regulation of cell proliferation and cytoskeletal dynamics. It is overexpressed in neurodegenerative disorders and is a proposed marker for Amyotrophic Lateral Sclerosis (ALS). Following recent reports of amyloid formation by S100 proteins, we investigated the aggregation properties of S100A6. Computational analysis using the aggregation predictors Waltz and Zyggregator revealed increased propensity within S100A6 helices HI and HIV. Subsequent analysis of Thioflavin-T binding kinetics under acidic conditions elicited a very fast process with no lag phase, and extensive formation of aggregates and stacked fibrils was observed by electron microscopy. Ca2+ exerted an inhibitory effect on the aggregation kinetics, which could be reverted upon chelation. An FT-IR investigation of the early conformational changes occurring under these conditions showed that Ca2+ promotes anti-parallel β-sheet conformations that repress fibrillation. At pH 7, Ca2+ rendered the fibril formation kinetics slower: time-resolved imaging showed that fibril formation is highly suppressed, with aggregates forming instead. In the absence of metals, an extensive network of fibrils is formed. S100A6 oligomers, but not fibrils, were found to be cytotoxic, decreasing cell viability by up to 40%. This effect was not observed when the aggregates were formed in the presence of Ca2+. Interestingly, native S100A6 seeds SOD1 aggregation, shortening its nucleation process. This suggests a cross-talk between these two proteins involved in ALS. Overall, these results put forward novel roles for S100 proteins, whose metal-modulated aggregation propensity may be a key aspect of their physiology and function.
Abstract:
Nowadays, the various industries and sectors of economic activity base their development on the constant search for sources of improvement, so as to improve the quality/price ratio. Although technological improvements and innovations emerge every day in the industrial sector, these alone are not enough. A large part of the optimization achieved, both in manufacturing and in service industries, comes from the "simple" elimination of waste and the constant search for sources of improvement. With this goal in mind, Grohe Portugal Componentes Sanitários, Lda. proposed the elimination of waste in the supply of components to the assembly lines of its plant in Albergaria-a-Velha. This process involves not only optimizing supply times and quantities, but also restructuring the different supply routines. The entire optimization process rests on the concept of Mizusumashi. The Mizusumashi, or logistics train as it is often called, aims to separate the supply task from the assembly function. It originates from adapting the Milk Run concept to internal logistics. It is worth noting that, for this "simple" concept to work efficiently enough to justify its application, many factors need adjustment or, in some cases, complete restructuring. The work carried out at this plant, which culminated in this document, was based on the analysis, evaluation and implementation of improvements to the assembly-line supply system. The entire supply process was analysed and broken down into its components, so that a suitable restructuring plan could be designed. Improvements to layout, times and tasks were implemented, and the results were positive in light of the initial objective. The whole plan was designed and documented with the aim of making the system adaptable to change, thus creating a system oriented towards continuous improvement. With a standardized, routine supply process, stock management becomes more precise, reducing the waste inherent in these functions.
Abstract:
This paper proposes an energy resources management methodology based on three distinct time horizons: day-ahead scheduling, hour-ahead scheduling, and real-time scheduling. Each scheduling process requires updating the generation and consumption operation and the status of stationary storage and electric vehicle storage. Besides the new operating conditions, more accurate forecast values of wind generation and consumption, obtained with short-term and very short-term methods, are also important. A case study considering a distribution network with intensive use of distributed generation and electric vehicles is presented.
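As an illustration of how the three horizons nest, the sketch below shows a minimal control loop in Python; every name, stub value and step size (hourly and 5-minute) is an assumption made for illustration, not the paper's algorithm.

```python
# Minimal sketch of the three nested scheduling horizons; all names,
# stub values and step sizes are illustrative assumptions.

def day_ahead_schedule():
    # 24h plan computed once per day from day-ahead forecasts
    return {"horizon": "24h", "dispatch": {}}

def reschedule(plan, horizon, forecasts, status):
    # re-optimise the previous plan using fresher forecasts and the
    # updated generation/consumption, storage and EV status
    return dict(plan, horizon=horizon, forecasts=forecasts, status=status)

plan = day_ahead_schedule()
for hour in range(24):
    status = {"storage_soc": 0.6, "ev_soc": 0.4}          # updated state
    plan = reschedule(plan, "1h", {"wind": 12.0, "load": 35.0}, status)
    for step in range(12):                                 # 5-minute steps
        plan = reschedule(plan, "5min", {"wind": 11.5, "load": 34.8}, status)
```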
Abstract:
Objectives: The purpose of this article is to identify differences between surveys using paper and online questionnaires. The author has extensive experience with the methodological questions involved in survey-based research, e.g. the limitations of postal and online questionnaires. Methods: Paper and online questionnaires were used in physician studies carried out in 1995 (doctors graduated in 1982-1991), 2000 (doctors graduated in 1982-1996), 2005 (doctors graduated in 1982-2001) and 2011 (doctors graduated in 1977-2006), and in a 2000 study of 457 family doctors. The response rates were 64%, 68%, 64%, 49% and 73%, respectively. Results: The physician studies showed that there were differences between the methods, connected with the use of a paper-based versus an online questionnaire and with the response rate. The online survey gave a lower response rate than the postal survey. The major advantages of the online survey were the short response time, the very low financial resource needs, and that the data were loaded directly into the data analysis software, saving the time and resources associated with the data entry process. Conclusions: The current article helps researchers plan the study design and choose the right data collection method.
Abstract:
In this work, a microwave-assisted extraction (MAE) methodology was compared with several conventional extraction methods (Soxhlet, Bligh & Dyer, modified Bligh & Dyer, Folch, modified Folch, Hara & Radin, Roese-Gottlieb) for quantification of the total lipid content of three fish species: horse mackerel (Trachurus trachurus), chub mackerel (Scomber japonicus), and sardine (Sardina pilchardus). The influence of species, extraction method and frozen storage time (from fresh to 9 months of freezing) on total lipid content was analysed in detail. The efficiencies of the MAE, Bligh & Dyer, Folch, modified Folch and Hara & Radin methods were the highest; although they were not statistically different, they differed in variability, with MAE showing the highest repeatability (CV = 0.034). The Roese-Gottlieb, Soxhlet, and modified Bligh & Dyer methods were very poor in terms of both efficiency and repeatability (CV between 0.13 and 0.18).
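For reference, the repeatability figure quoted above is the coefficient of variation of replicate extractions; a minimal computation follows, with made-up replicate values rather than the study's data.

```python
import numpy as np

def coefficient_of_variation(replicates):
    # CV = sample standard deviation / mean of replicate measurements
    r = np.asarray(replicates, dtype=float)
    return r.std(ddof=1) / r.mean()

# illustrative replicate lipid contents (g/100 g); not the study's data
print(coefficient_of_variation([6.1, 5.9, 6.3, 6.0]))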
Abstract:
GOAL: The manufacture and distribution of instant thin-layer chromatography strips with silica gel (ITLC-SG) (reference method) is currently discontinued, so there is a need for an alternative method for the determination of the radiochemical purity (RCP) of 99mTc-tetrofosmin. This study aims to compare five alternative methods proposed by the producer to determine the RCP of 99mTc-tetrofosmin. METHODS: Nineteen vials of tetrofosmin were radiolabelled with 99mTc and the RCP percentages were determined. Five different methods were compared with the standard RCP testing method (ITLC-SG, 2x20 cm): Whatman 3MM (1x10 cm) with acetone and dichloromethane (method 1); Whatman 3MM (1x10 cm) with ethyl acetate (method 2); aluminium oxide-coated plastic thin-layer chromatography (TLC) plate (1x10 cm) with ethanol (method 3); Whatman 3MM (2x20 cm) with acetone and dichloromethane (method 4); solid-phase extraction with a C18 cartridge (method 5). RESULTS: The average RCP values were 95.30% ± 1.28% (method 1), 93.95% ± 0.61% (method 2), 96.85% ± 0.93% (method 3), 92.94% ± 0.99% (method 4) and 96.25% ± 2.57% (method 5) (n=12 each), and 93.15% ± 1.13% for the standard method (n=19). There were statistically significant differences in the values obtained for methods 1 (P=0.001), 3 (P=0.000) and 5 (P=0.004), and no statistically significant differences for methods 2 (P=0.113) and 4 (P=0.327). CONCLUSION: From the results obtained, methods 2 and 4 showed the closest agreement with the standard method. Unlike method 4, method 2 is less time-consuming than the reference method and can overcome the problems associated with solvent toxicity. The remaining methods (1, 3 and 5) tended to overestimate the RCP value compared to the standard method.
Abstract:
Introduction: Although relative uptake values are not the main objective of a 99mTc-DMSA scan, they are important quantitative information. In most dynamic renal scintigraphies, attenuation correction is essential to obtain a reliable quantification result. In DMSA scans, however, the absence of significant background and the lower attenuation in paediatric patients mean that attenuation correction techniques are often not applied. The geometric mean is the most common method, but it requires the acquisition of an (extra) anterior projection, which is not acquired by a large number of NM departments. This method and the attenuation factors proposed by Tonnesen were correlated with the absence of attenuation correction procedures. Material and Methods: Images from 20 individuals (aged 3 years ± 2) were used and the two attenuation correction methods applied. The mean acquisition time (post DMSA administration) was 3.5 hours ± 0.8 h. Results: The absence of attenuation correction showed a good correlation with both attenuation methods (r=0.73 ± 0.11), and the mean difference in uptake values between the methods was 4 ± 3. The correlation was higher at lower ages. The two attenuation correction methods correlated better with each other than with the "no attenuation correction" method (r=0.82 ± 0.8), with mean uptake differences of 2 ± 2. Conclusion: The decision not to apply any attenuation correction method can be justified by the minor differences observed in the relative kidney uptake values. Nevertheless, if an accurate value of the relative kidney uptake is needed, an attenuation correction method should be used. The attenuation correction factors proposed by Tonnesen can be easily implemented, and so become a practical alternative, namely when the anterior projection needed for the geometric mean methodology is not acquired.
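For context, the geometric-mean method mentioned above combines the background-corrected counts C of each kidney from the anterior and posterior projections; the standard formulation is:

```latex
% Geometric mean of anterior/posterior counts per kidney, and the
% resulting relative (split) uptake of the left kidney
GM_{L} = \sqrt{C_{L}^{\mathrm{ant}} \, C_{L}^{\mathrm{post}}}, \qquad
\mathrm{Uptake}_{L} = \frac{GM_{L}}{GM_{L} + GM_{R}} \times 100\%
```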
Abstract:
The IEEE 802.15.4 is the most widely used protocol for Wireless Sensor Networks (WSNs) and serves as a baseline for several higher-layer protocols such as ZigBee, 6LoWPAN or WirelessHART. Its MAC (Medium Access Control) supports both contention-free (CFP, based on the reservation of guaranteed time slots, GTS) and contention-based (CAP, ruled by CSMA/CA) access when operating in beacon-enabled mode. Thus, it enables differentiation between real-time and best-effort traffic. However, some WSN applications and higher-layer protocols may strongly benefit from the possibility of supporting more traffic classes. This happens, for instance, in dense WSNs used in time-sensitive industrial applications. In this context, we propose to differentiate traffic classes within the CAP, enabling lower transmission delays and a higher success probability for time-critical messages, such as for event detection, GTS reservation and network management. Building upon a previously proposed methodology (TRADIF), in this paper we outline its implementation and experimental validation over a real-time operating system. Importantly, TRADIF is fully backward compatible with the IEEE 802.15.4 standard, enabling the creation of different traffic classes just by tuning some MAC parameters.
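As a sketch of what "tuning some MAC parameters" can look like, the snippet below assigns different slotted CSMA/CA settings to two traffic classes. The parameter names (macMinBE, macMaxBE, macMaxCSMABackoffs, CW) come from IEEE 802.15.4; the concrete values are illustrative assumptions, not the TRADIF settings.

```python
import random

# Per-class CSMA/CA parameters: smaller backoff exponents and a smaller
# contention window give time-critical frames a statistical head start.
TRAFFIC_CLASSES = {
    "time_critical": {"macMinBE": 0, "macMaxBE": 2, "macMaxCSMABackoffs": 5, "CW": 1},
    "best_effort":   {"macMinBE": 3, "macMaxBE": 5, "macMaxCSMABackoffs": 4, "CW": 2},
}

def backoff_slots(traffic_class):
    # first random backoff is drawn in [0, 2^BE - 1] backoff periods
    be = TRAFFIC_CLASSES[traffic_class]["macMinBE"]
    return random.randint(0, 2 ** be - 1)

print(backoff_slots("time_critical"), backoff_slots("best_effort"))
```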
Abstract:
On-chip debug (OCD) features are frequently available in modern microprocessors. Their contribution to shortening time-to-market justifies the industry investment in this area, where a number of competing or complementary proposals are available or under development, e.g. NEXUS, cJTAG, IJTAG. The controllability and observability features provided by OCD infrastructures offer a valuable toolbox that can be used well beyond the debugging arena, improving the return on investment by diluting its cost across a wider spectrum of application areas. This paper discusses the use of OCD features for validating fault-tolerant architectures, and in particular the efficiency of various fault injection methods provided by enhanced OCD infrastructures. The reference data for our comparative study was captured on a workbench comprising the 32-bit Freescale MPC-565 microprocessor, an iSYSTEM IC3000 debugger (iTracePro version) and the winIDEA 2005 debugging package. All enhanced OCD infrastructures were implemented in VHDL and the results were obtained by simulation within the same fault injection environment. The focus of this paper is on the comparative analysis of the experimental results obtained for various OCD configurations and debugging scenarios.
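To make the fault injection idea concrete, here is a self-contained sketch of a single-bit-flip campaign. The target workload and the OCD-style halt/corrupt/resume cycle are simulated in plain Python; in the real workbench this is driven through the debugger, so everything here is an illustrative assumption.

```python
import random

def flip_bit(value, bit, width=32):
    # single-event-upset model: invert one bit of a register value
    return (value ^ (1 << bit)) & ((1 << width) - 1)

def target_run(r3):
    # stand-in for the instrumented workload; the low 8 bits of r3 do
    # not affect the result, modelling logically masked faults
    return ((r3 >> 8) * 3 + 7) & 0xFFFFFFFF

def campaign(trials=1000, seed=1):
    rng = random.Random(seed)
    golden = target_run(0x1234)
    outcomes = {"silent": 0, "corrupted": 0}
    for _ in range(trials):
        # emulate: halt CPU, corrupt one register bit, resume, compare
        faulty_r3 = flip_bit(0x1234, rng.randrange(32))
        key = "corrupted" if target_run(faulty_r3) != golden else "silent"
        outcomes[key] += 1
    return outcomes

print(campaign())
```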
Abstract:
In this paper we propose the use of least-squares based methods for obtaining digital rational approximations (IIR filters) to fractional-order integrators and differentiators of type s^α, α ∈ R. Adoption of the Padé, Prony and Shanks techniques is suggested. These techniques are usually applied in the signal modelling of deterministic signals. They yield suboptimal solutions to the problem, requiring only the solution of a set of linear equations. The results reveal that the least-squares approach gives similar or superior approximations compared with other widely used methods. Their effectiveness is illustrated, both in the time and frequency domains, as well as in the fractional differintegration of some standard time-domain functions.
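A minimal Prony-style fit in this spirit is sketched below, assuming the target impulse response comes from the Grünwald–Letnikov (Euler) discretisation of s^α; the orders, sample length and α = 0.5 are arbitrary choices for illustration, not the paper's setup.

```python
import numpy as np

def gl_coeffs(alpha, N, T=1.0):
    # impulse response of the Euler discretisation (1 - z^-1)^alpha / T^alpha,
    # via the Grünwald–Letnikov binomial recurrence
    h = np.zeros(N)
    h[0] = T ** (-alpha)
    for k in range(1, N):
        h[k] = h[k - 1] * (k - 1 - alpha) / k
    return h

def prony(h, nb, na):
    # Prony's method: fit B(z)/A(z) with nb zeros and na poles to h[n].
    # Linear prediction on the tail of h gives the denominator ...
    N = len(h)
    H = np.array([[h[n - k - 1] for k in range(na)] for n in range(nb + 1, N)])
    a_tail = np.linalg.lstsq(H, -h[nb + 1:], rcond=None)[0]
    a = np.concatenate(([1.0], a_tail))
    # ... and the numerator follows by convolving h with the denominator
    b = np.convolve(h, a)[: nb + 1]
    return b, a

h = gl_coeffs(0.5, 512)      # half-order differentiator target
b, a = prony(h, 4, 4)        # 4th-order IIR approximation
print(b, a)
```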
Abstract:
Forest fire dynamics are often characterized by the absence of a characteristic length-scale, long-range correlations in space and time, and long memory, features also associated with fractional-order systems. In this paper a public domain forest fire catalogue, containing information on events in Portugal covering the period from 1980 up to 2012, is tackled. The events are modelled as time series of Dirac impulses with amplitude proportional to the burnt area. The time series are viewed as the system output and are interpreted as a manifestation of the system dynamics. In the first phase we use the pseudo phase plane (PPP) technique to describe forest fire dynamics. In the second phase we use multidimensional scaling (MDS) visualization tools. The PPP allows the representation of forest fire dynamics in two-dimensional space, by taking time series representative of the phenomena. The MDS approach generates maps where objects perceived as similar to each other form clusters. The results are analysed in order to extract relationships among the data and to better understand forest fire behaviour.
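A compact version of the PPP reconstruction is sketched below, with the delay chosen at the first minimum of the autocorrelation (one common heuristic; the paper may use a different criterion) and a toy heavy-tailed impulse series standing in for the catalogue data.

```python
import numpy as np

def first_minimum_delay(x, max_tau=200):
    # heuristic delay choice: first local minimum of the autocorrelation
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]
    for tau in range(1, max_tau):
        if ac[tau + 1] > ac[tau]:
            return tau
    return max_tau

def pseudo_phase_plane(x, tau):
    # PPP: pair each sample with its tau-delayed copy, (x(t), x(t + tau))
    return x[:-tau], x[tau:]

rng = np.random.default_rng(0)
# toy daily series of sparse, heavy-tailed "burnt area" impulses
x = rng.pareto(2.0, 5000) * (rng.random(5000) < 0.05)
tau = first_minimum_delay(x)
u, v = pseudo_phase_plane(x, tau)
```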
Abstract:
The intensive use of distributed generation based on renewable resources increases the complexity of power systems management, particularly short-term scheduling. Demand response, storage units and electric and plug-in hybrid vehicles also pose new challenges to short-term scheduling. However, these distributed energy resources can contribute significantly to making short-term scheduling more efficient and effective, improving power system reliability. This paper proposes a short-term scheduling methodology based on two distinct time horizons: hour-ahead scheduling and real-time scheduling, considering the point of view of one aggregator agent. In each scheduling process, it is necessary to update the generation and consumption operation and the storage and electric vehicle status. Besides the new operating conditions, more accurate forecast values of wind generation and consumption become available, resulting from short-term and very short-term forecasting methods. In this paper, the aggregator has the main goal of maximizing its profits while fulfilling the established contracts with the aggregated and external players.
Abstract:
Over the past decades several approaches to schedulability analysis have been proposed for both uni-processor and multi-processor real-time systems. Although different techniques are employed, very little has been put forward in terms of formal specifications, with the consequent possibility of misinterpretations or ambiguities in the problem statement. Using a logic-based approach to schedulability analysis in the design of hard real-time systems eases the synthesis of correct-by-construction procedures for both static and dynamic verification processes. In this paper we propose a novel approach to schedulability analysis based on a timed temporal logic with time durations. Our approach subsumes classical methods for uni-processor scheduling analysis over compositional resource models by providing the developer with counter-examples, and by ruling out schedules that cause unsafe violations in the system. We also provide an example showing the effectiveness of our proposal.
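To give a flavour of such specifications, an illustrative bounded-response formula (not the paper's own logic) states that every release of task i is followed by its completion within the deadline D_i:

```latex
% Box = always; Diamond_{<= D} = eventually within D time units
\Box \big( \mathit{release}_i \rightarrow \Diamond_{\leq D_i}\, \mathit{complete}_i \big)
```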
Abstract:
Microcystin-LR (MC-LR) is a dangerous toxin found in environmental waters, typically quantified by high-performance liquid chromatography and/or enzyme-linked immunosorbent assays. Quick, low-cost, on-site analysis is thus required to ensure human safety and wide screening programmes. This work proposes label-free potentiometric sensors made of solid-contact electrodes coated with a surface-imprinted polymer on the surface of multi-walled carbon nanotubes (CNTs) incorporated in a polyvinyl chloride membrane. The imprinting effect was checked by using non-imprinted materials. The MC-LR-sensitive sensors were evaluated, characterized and applied successfully to spiked environmental waters. The presented method offers the advantages of low cost, portability, easy operation and suitability for adaptation to flow methods.
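For background, potentiometric transduction of this kind is usually interpreted through the Nernstian relation between electrode potential and the logarithm of analyte activity; the abstract does not report the observed slope, so the relation below is general context rather than this sensor's calibration.

```latex
% Nernstian response: E varies linearly with the log-activity of the
% analyte; slope 2.303 RT / zF (about 59/z mV per decade at 25 °C)
E = E^{0} + \frac{2.303\,RT}{zF}\,\log a_{\text{MC-LR}}
```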
Abstract:
In this paper we present the operational matrices of the left Caputo fractional derivative, right Caputo fractional derivative and Riemann–Liouville fractional integral for shifted Legendre polynomials. We develop an accurate numerical algorithm to solve the two-sided space–time fractional advection–dispersion equation (FADE) based on a spectral shifted Legendre tau (SLT) method in combination with the derived shifted Legendre operational matrices. The fractional derivatives are described in the Caputo sense. We propose a spectral SLT method in both the temporal and spatial discretizations of the two-sided space–time FADE. This technique reduces the two-sided space–time FADE to a system of algebraic equations, which simplifies the problem. Numerical experiments were carried out to confirm the spectral accuracy and efficiency of the proposed algorithm. By selecting relatively few Legendre polynomial degrees, we are able to obtain very accurate approximations, demonstrating the utility of the new approach over other numerical methods.
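For reference, a representative form of the two-sided space–time FADE treated by such methods is given below; the coefficients and source term are generic, and the paper's exact formulation may differ.

```latex
% Two-sided space-time FADE: Caputo time derivative of order alpha,
% left/right fractional space derivatives of order beta
\frac{\partial^{\alpha} u}{\partial t^{\alpha}}
  = -v\,\frac{\partial u}{\partial x}
  + d_{+}\,\frac{\partial^{\beta} u}{\partial x^{\beta}}
  + d_{-}\,\frac{\partial^{\beta} u}{\partial (-x)^{\beta}}
  + q(x,t),
\qquad 0 < \alpha \leq 1,\; 1 < \beta \leq 2
```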