871 results for Construction. Indicators System. Performance. Ergonomics. Validation
Abstract:
Graduate Program in Mechanical Engineering - FEIS
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Several studies have shown that different stretching routines can lead to acute decreases in neuromuscular system performance. Although the deficit in muscle strength mediated by different stretching methods has been systematically observed, few studies have investigated the possible existence of a dose-response relationship between stretching volume and muscle strength deficit in older adults. In this context, the objective of this study was to investigate the acute effect of two different stretching volumes on the isometric force-time curve (Cf-t) in elderly women. The study included 13 older women (64.08 ± 4.27 years, 69.98 ± 10.56 kg, 157.90 ± 8.66 cm, 28.25 ± 4.22 kg/m²). The participants visited the laboratory on five consecutive days, of which the first two were used for familiarization. During the other three days the participants underwent the experimental conditions: control (C), 30-second stretching (AE30), and 60-second stretching (AE60). For the AE30 and AE60 conditions, three sets of passive static stretching were performed, lasting 30 and 60 seconds, respectively. The experimental conditions were separated by an interval of at least 24 hours and their order of execution was randomized. The isometric Cf-t of the knee extensor muscles was recorded in an extensor chair connected to a force transducer. Measurements were taken immediately after each experimental condition, for five seconds. For statistical analysis, descriptive procedures and a one-way ANOVA were used to check for possible changes in Maximal Voluntary Contraction (CVM) and Peak Rate of Force Development (TDFP) among the three conditions (p < 0.05). The ANOVA showed no statistically significant difference in CVM or TDFP between the three conditions. It can be concluded that different volumes of static stretching, three sets ... (Complete abstract: click electronic access below)
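As a rough illustration of the statistical comparison described above, the sketch below runs a one-way ANOVA across three conditions with SciPy; the variable names and sample values are hypothetical, not data from the study.

```python
# Minimal sketch of a one-way ANOVA across three stretching conditions,
# assuming each list holds one measurement (e.g., CVM) per participant.
from scipy.stats import f_oneway

control = [412.3, 398.7, 455.1, 430.0, 401.5]   # hypothetical values
ae30    = [405.9, 391.2, 449.8, 427.4, 399.0]
ae60    = [400.1, 388.5, 445.2, 423.9, 395.6]

f_stat, p_value = f_oneway(control, ae30, ae60)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
# The abstract reports no significant difference, i.e., p >= 0.05.
```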
Abstract:
AC Biosusceptometry (BAC) is a research tool that has been extensively explored by the Biomagnetism group at IBB-UNESP for monitoring the gastrointestinal tract, its response to a known drug, or the in vivo performance of solid dosage forms. During this period the BAC, which is characterized by high sensitivity and low cost, has been developed primarily for recording signals of contractile activity and transit in the human gastrointestinal tract. With the possibility of producing images with this instrumentation, it became possible to evaluate different in vitro and in vivo situations for physiological and pharmaceutical studies. Considering the good performance of this system in producing planar images, the first aim of the tomographic BAC system (TBAC) was to evaluate the performance of the BAC system in producing tomographic images of ferromagnetic phantoms with a single-channel system. All these applications were only possible because of its sensitivity to materials of high magnetic susceptibility, such as ferrite, which allows it to produce an electrical signal proportional to the variation of the magnetic flux generated by the presence of a magnetic marker next to a first-order gradiometer. By measuring this variation at various points it was possible to generate planar images, which recently came to be produced in systems with multiple detectors, called multi-channel systems. From planar images, tomographic images of bar phantoms were also produced with a 13-channel BAC system using only the center channel, with good results when applied to simple objects such as one and two bars. When testing the resolution of the system with more elaborate shapes, the quality and resolution of the reconstructed images were not satisfactory, which could be solved by increasing the spatial sampling rate and hence the acquisition time. The present system works with an acquisition time of about five hours. Since this system is intended for in vivo experiments, the acquisition time became a ...
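As a loose illustration of assembling a planar image from point-by-point sensor readings, the sketch below scans a grid and stores one reading per position; the read_sensor function, the synthetic signal, and the grid dimensions are hypothetical placeholders, not part of the BAC instrumentation.

```python
# Minimal sketch: build a planar image by sampling a sensor over a grid of points.
# read_sensor() is a hypothetical stand-in for the gradiometer readout, which the
# abstract describes as proportional to the magnetic flux variation at each point.
import numpy as np

def read_sensor(x, y):
    # Placeholder: a real setup would query the acquisition hardware here.
    return np.exp(-((x - 5) ** 2 + (y - 5) ** 2) / 4.0)  # synthetic "marker" signal

nx, ny = 11, 11                      # hypothetical number of scan positions
image = np.zeros((ny, nx))
for j in range(ny):                  # sample row by row
    for i in range(nx):
        image[j, i] = read_sensor(i, j)

print(image.round(2))                # planar map of signal intensity
```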
Abstract:
In this paper we present a classification system that uses a combination of texture features from stromal regions: Haralick features and Local Binary Patterns (LBP) in the wavelet domain. The system has five steps for classification of the tissues. First, the stromal regions were detected and extracted using segmentation techniques based on thresholding and the RGB colour space. Second, wavelet decomposition was applied to the extracted regions to obtain the wavelet coefficients. Third, the Haralick and LBP features were extracted from the coefficients. Fourth, relevant features were selected using the ANOVA statistical method. The classification (fifth step) was performed with Radial Basis Function (RBF) networks. The system was tested on 105 prostate images, which were divided into three groups of 35 images: normal, hyperplastic and cancerous. The system performance was evaluated using the area under the ROC curve and resulted in 0.98 for normal versus cancer, 0.95 for hyperplasia versus cancer and 0.96 for normal versus hyperplasia. Our results suggest that texture features can be used as discriminators for stromal tissue in prostate images. Furthermore, the system was effective in classifying prostate images, especially the hyperplastic class, which is the most difficult type in diagnosis and prognosis.
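A minimal sketch of the five-step pipeline described above, assuming pre-segmented grayscale stromal patches and using an RBF-kernel SVM as a stand-in for the RBF network; the library calls assume recent versions of PyWavelets, scikit-image and scikit-learn, and all names are illustrative rather than taken from the paper.

```python
# Sketch: wavelet-domain Haralick + LBP features, ANOVA selection, RBF classifier.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC  # RBF-kernel SVM used here in place of an RBF network

def texture_features(patch):
    """Haralick-style and LBP features from the wavelet approximation band."""
    approx, _ = pywt.wavedec2(patch, 'db1', level=1)          # step 2: wavelet domain
    band = np.uint8(255 * (approx - approx.min()) / (np.ptp(approx) + 1e-9))
    glcm = graycomatrix(band, distances=[1], angles=[0], levels=256, normed=True)
    haralick = [graycoprops(glcm, p)[0, 0]                    # step 3a: Haralick-style
                for p in ('contrast', 'homogeneity', 'energy', 'correlation')]
    lbp = local_binary_pattern(band, P=8, R=1, method='uniform')
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)  # step 3b: LBP
    return np.concatenate([haralick, hist])

def build_classifier(patches, labels, k=8):
    """Steps 4-5: ANOVA feature selection followed by an RBF classifier."""
    X = np.array([texture_features(p) for p in patches])
    model = make_pipeline(SelectKBest(f_classif, k=k), SVC(kernel='rbf'))
    return model.fit(X, labels)
```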
Abstract:
Graduate Program in Mechanical Engineering - FEG
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Environmental Sciences - Sorocaba
Abstract:
XML similarity evaluation has become a central issue in the database and information communities, its applications ranging over document clustering, version control, data integration and ranked retrieval. Various algorithms for comparing hierarchically structured data, XML documents in particular, have been proposed in the literature. Most of them make use of techniques for finding the edit distance between tree structures, XML documents being commonly modeled as Ordered Labeled Trees. Yet, a thorough investigation of current approaches led us to identify several similarity aspects, i.e., sub-tree related structural and semantic similarities, which are not sufficiently addressed while comparing XML documents. In this paper, we provide an integrated and fine-grained comparison framework to deal with both structural and semantic similarities in XML documents (detecting the occurrences and repetitions of structurally and semantically similar sub-trees), and to allow the end-user to adjust the comparison process according to her requirements. Our framework consists of four main modules for (i) discovering the structural commonalities between sub-trees, (ii) identifying sub-tree semantic resemblances, (iii) computing tree-based edit operations costs, and (iv) computing tree edit distance. Experimental results demonstrate higher comparison accuracy with respect to alternative methods, while timing experiments reflect the impact of semantic similarity on overall system performance.
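As a rough sketch of the tree-edit-distance machinery underlying such comparisons, the code below computes a simplified top-down edit distance between ordered labeled trees built from XML elements; it is not the framework described in the paper, and the helper names are illustrative.

```python
# Sketch: simplified top-down edit distance between ordered labeled trees.
# XML documents are modeled as ordered labeled trees, as in the paper;
# this is a constrained variant, not the full Zhang-Shasha algorithm.
import xml.etree.ElementTree as ET

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def from_xml(elem):
    """Build an ordered labeled tree from an ElementTree element (tags as labels)."""
    return Node(elem.tag, [from_xml(c) for c in elem])

def size(node):
    return 1 + sum(size(c) for c in node.children)

def tree_edit_distance(a, b):
    relabel = 0 if a.label == b.label else 1
    ca, cb = a.children, b.children
    m, n = len(ca), len(cb)
    # d[i][j]: cost of aligning the first i children of a with the first j of b.
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + size(ca[i - 1])           # delete whole subtree
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + size(cb[j - 1])           # insert whole subtree
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + size(ca[i - 1]),
                          d[i][j - 1] + size(cb[j - 1]),
                          d[i - 1][j - 1] + tree_edit_distance(ca[i - 1], cb[j - 1]))
    return relabel + d[m][n]

t1 = from_xml(ET.fromstring("<paper><title/><authors><a/><a/></authors></paper>"))
t2 = from_xml(ET.fromstring("<paper><title/><authors><a/></authors><year/></paper>"))
print(tree_edit_distance(t1, t2))   # 2: one <a/> deleted, one <year/> inserted
```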
Abstract:
This article presents an overview of relevant issues to be considered in the development of standardized phytochemical preparations, focusing on the use of the spouted bed as a drying method. Aspects related to the effects of feed composition properties and processing parameters on system performance and product quality are addressed. From the information presented, it can be concluded that the spouted bed technology can be successfully applied for production of high-quality phytochemical preparations suitable for food and pharmaceutical purposes, considering the requirements for product safety, quality, and efficacy. Nevertheless, it should be emphasized that, at this time, the proposed technology is appropriate for small-scale production, mainly due to difficulties concerning scale-up, modeling, and the simulation of spouted bed systems, and also for predicting product properties and system behavior during operation.
Abstract:
OBJECTIVE: This study proposes a new approach that considers uncertainty in predicting and quantifying the presence and severity of diabetic peripheral neuropathy. METHODS: A rule-based fuzzy expert system was designed by four experts in diabetic neuropathy. The model variables were used to classify neuropathy in diabetic patients, defining it as mild, moderate, or severe. System performance was evaluated by means of the Kappa agreement measure, comparing the results of the model with those generated by the experts in an assessment of 50 patients. Accuracy was evaluated by an ROC curve analysis obtained based on 50 other cases; the results of those clinical assessments were considered to be the gold standard. RESULTS: According to the Kappa analysis, the model was in moderate agreement with expert opinions. The ROC analysis (evaluation of accuracy) determined an area under the curve equal to 0.91, demonstrating very good consistency in classifying patients with diabetic neuropathy. CONCLUSION: The model efficiently classified diabetic patients with different degrees of neuropathy severity. In addition, the model provides a way to quantify diabetic neuropathy severity and allows a more accurate patient condition assessment.
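A minimal sketch of the agreement and accuracy evaluation described above, using scikit-learn's Cohen's kappa and ROC-AUC functions; the label arrays are hypothetical examples, not the study data.

```python
# Sketch: evaluating a classifier against expert opinion with Cohen's kappa
# (agreement on severity classes) and ROC AUC (accuracy vs. a gold standard).
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Hypothetical severity labels for a handful of patients: 0=mild, 1=moderate, 2=severe.
expert_labels = [0, 1, 2, 1, 0, 2, 1, 1]
model_labels  = [0, 1, 2, 2, 0, 2, 1, 0]
print("kappa:", cohen_kappa_score(expert_labels, model_labels))

# Hypothetical gold-standard diagnosis (1 = neuropathy) and model scores in [0, 1].
gold   = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.2, 0.7, 0.8, 0.4, 0.1, 0.6, 0.3]
print("AUC:", roc_auc_score(gold, scores))
```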
Abstract:
The Peer-to-Peer network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization where there is no central control and all the nodes should be able not only to request services, but to provide them to other peers as well. While on one hand such a high level of decentralization might lead to interesting properties like scalability and fault tolerance, on the other hand it implies many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment systems. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase its own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given the nature of P2P systems, based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties on a system scale might consist in obtaining them as emergent properties of the many interactions occurring at local node level. Two methods are typically used to face the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying Game Theory techniques, especially to find Nash Equilibria in the game and to reach them, making the system stable against possible deviant behaviors. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both the methods mentioned above. Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, in which each peer periodically tries to copy another peer which is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind the algorithm is to have low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both from the topology and from the nodes' strategy point of view. Initial tests with a simple Prisoners' Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm: in some cases selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the point of view of cooperation formation. The final step is to apply our results to more realistic scenarios. We put our efforts into studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but also because it has many points in common with the SLAC and SLACER algorithms, ranging from the game-theoretical inspiration (tit-for-tat-like mechanism) to the swarm topology.
We found fairness, defined as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew inspiration from the knowledge of cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent called BitFair that has been evaluated through simulation and has shown its ability to enforce fairness and to tackle free-riding and cheating nodes.
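As a rough sketch of the copy-the-better-peer mechanism described above (not the actual SLAC/SLACER implementation), the code below lets each node periodically compare its utility with a randomly chosen node and, if the other performs better, copy its links and strategy with a small chance of mutation; all parameters are illustrative.

```python
# Sketch of a SLAC-like evolutionary step: low-performance nodes copy the links
# and strategy of better-performing nodes, with occasional mutation.
import random

class Node:
    def __init__(self, node_id, strategy):
        self.id = node_id
        self.strategy = strategy          # e.g., "cooperate" or "defect"
        self.links = set()                # neighbour ids (network topology)
        self.utility = 0.0                # accumulated payoff from local interactions

def evolutionary_step(nodes, mutation_rate=0.05):
    """One round of the copy-and-mutate rule, applied to every node."""
    for node in nodes:
        other = random.choice(nodes)
        if other is node:
            continue
        if other.utility > node.utility:
            # Copy the better node's strategy and neighbourhood (plus a link to it).
            node.strategy = other.strategy
            node.links = set(other.links) | {other.id}
            node.links.discard(node.id)
            node.utility = 0.0            # reset payoff after rewiring
        if random.random() < mutation_rate:
            # Mutation: adopt a random strategy and keep a single random link.
            node.strategy = random.choice(["cooperate", "defect"])
            node.links = {random.choice(nodes).id}
            node.links.discard(node.id)
```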
Abstract:
The miniaturization race in the hardware industry, aimed at continuously increasing transistor density on a die, no longer brings corresponding application performance improvements. One of the most promising alternatives is to exploit the heterogeneous nature of common applications in hardware. Supported by reconfigurable computation, which has already proved its efficiency in accelerating data-intensive applications, this concept promises a breakthrough in contemporary technology development. Memory organization in such heterogeneous reconfigurable architectures becomes very critical. Two primary aspects introduce a sophisticated trade-off. On the one hand, a memory subsystem should provide a well-organized distributed data structure and guarantee the required data bandwidth. On the other hand, it should hide the heterogeneous hardware structure from the end user, in order to support feasible high-level programmability of the system. This thesis explores heterogeneous reconfigurable hardware architectures and presents possible solutions to cope with the problem of memory organization and data structure. Using the MORPHEUS heterogeneous platform as an example, the discussion follows the complete design cycle, from decision making and justification to hardware realization. Particular emphasis is placed on methods to support high system performance, meet application requirements, and provide a user-friendly programming interface. As a result, the research introduces a complete heterogeneous platform enhanced with a hierarchical memory organization, which copes with its task by separating computation from communication, providing reconfigurable engines with computation and configuration data, and unifying heterogeneous computational devices through local storage buffers. It is distinguished from related solutions by its distributed data-flow organization, specifically engineered mechanisms to operate on data in local domains, a particular communication infrastructure based on a Network-on-Chip, and thorough methods to prevent computation and communication stalls. In addition, a novel advanced technique to accelerate memory access was developed and implemented.
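As a generic illustration of the "separate computation from communication via local buffers" idea (not the MORPHEUS hardware itself), the sketch below overlaps data transfer and processing using two alternating in-flight buffers; all names and sizes are illustrative.

```python
# Sketch: double buffering to overlap communication (filling a local buffer)
# with computation (processing the previously filled buffer).
import threading
import queue

def producer(out_q, n_blocks, block_size):
    """Stands in for the communication side: streams data blocks into local buffers."""
    for i in range(n_blocks):
        out_q.put([i] * block_size)      # "transfer" one block into a buffer
    out_q.put(None)                      # end-of-stream marker

def consumer(in_q):
    """Stands in for the computation side: processes whichever buffer is ready."""
    total = 0
    while True:
        block = in_q.get()
        if block is None:
            break
        total += sum(block)              # the actual "computation"
    return total

buffers = queue.Queue(maxsize=2)         # two in-flight buffers: double buffering
t = threading.Thread(target=producer, args=(buffers, 8, 1024))
t.start()
result = consumer(buffers)
t.join()
print(result)
```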