943 results for: diagnostica energetica, diagnostica strutturale, prove non distruttive, edifici storici
Abstract:
There has been much conjecture of late as to whether the patentable subject matter standard contains a physicality requirement. The issue came to a head when the Federal Circuit introduced the machine-or-transformation test in In re Bilski and declared it to be the sole test for determining subject matter eligibility. Many commentators criticized the test, arguing that it is inconsistent with Supreme Court precedent and the need for the patent system to respond appropriately to all new and useful innovation in whatever form it arises. Those criticisms were vindicated when, on appeal, the Supreme Court in Bilski v. Kappos dispensed with any suggestion that the patentable subject matter test involves a physicality requirement. In this article, the issue is addressed from a normative perspective: it asks whether the patentable subject matter test should contain a physicality requirement. The conclusion reached is that it should not, because such a limitation is not an appropriate means of encouraging much of the valuable innovation we are likely to witness during the Information Age. It is contended that it is not only traditionally-recognized mechanical, chemical and industrial manufacturing processes that are patent eligible, but that patent eligibility extends to include non-machine implemented and non-physical methods that do not have any connection with a physical device and do not cause a physical transformation of matter. Concerns raised that there is a trend of overreaching commoditization or propertization, where the boundaries of patent law have been expanded too far, are unfounded since the strictures of novelty, nonobviousness and sufficiency of description will exclude undeserving subject matter from patentability. The argument made is that introducing a physicality requirement will have unintended adverse effects in various fields of technology, particularly those emerging technologies that are likely to have a profound social effect in the future.
Abstract:
Mesoporous bioactive glass (MBG) is a new class of biomaterials with a well-ordered nanochannel structure, whose in vitro bioactivity is far superior to that of non-mesoporous bioactive glass (BG); the material's in vivo osteogenic properties are, however, yet to be assessed. Porous silk scaffolds have been used for bone tissue engineering, but this material's osteoconductivity is far from optimal. The aims of this study were to incorporate MBG into silk scaffolds in order to improve their osteoconductivity and then to compare the effect of MBG and BG on the in vivo osteogenesis of silk scaffolds. MBG/silk and BG/silk scaffolds with a highly porous structure were prepared by a freeze-drying method. The mechanical strength, in vitro apatite mineralization, silicon ion release and pH stability of the composite scaffolds were assessed. The scaffolds were implanted into calvarial defects in SCID mice and the degree of in vivo osteogenesis was evaluated by microcomputed tomography (μCT), hematoxylin and eosin (H&E) and immunohistochemistry (type I collagen) analyses. The results showed that MBG/silk scaffolds have better physicochemical properties (mechanical strength, in vitro apatite mineralization, Si ion release and pH stability) than BG/silk scaffolds. MBG and BG both improved the in vivo osteogenesis of silk scaffolds. μCT and H&E analyses showed that MBG/silk scaffolds induced a slightly higher rate of new bone formation in the defects than did BG/silk scaffolds, and immunohistochemical analysis showed greater synthesis of type I collagen in MBG/silk scaffolds than in BG/silk scaffolds.
Abstract:
Association rule mining has contributed to many advances in the area of knowledge discovery. However, the quality of the discovered association rules is a big concern and has drawn increasing attention recently. One problem with the quality of the discovered association rules is the huge size of the extracted rule set. Often a very large number of rules can be extracted from a dataset, but many of them are redundant with respect to other rules and thus useless in practice. Mining non-redundant rules is a promising approach to solving this problem. In this paper, we first propose a definition of redundancy, then propose a concise representation, called the Reliable basis, for representing non-redundant association rules. The Reliable basis contains a set of non-redundant rules which are derived using frequent closed itemsets and their generators, instead of the frequent itemsets usually used by traditional association rule mining approaches. An important contribution of this paper is that we propose to use the certainty factor as the criterion for measuring the strength of the discovered association rules. Using this criterion, we can ensure the elimination of as many redundant rules as possible without reducing the inference capacity of the remaining non-redundant rules. We prove that redundancy elimination based on the proposed Reliable basis does not reduce the strength of belief in the extracted rules. We also prove that all association rules, with their supports and confidences, can be retrieved from the Reliable basis without accessing the dataset. Therefore, the Reliable basis is a lossless representation of association rules. Experimental results show that the proposed Reliable basis can significantly reduce the number of extracted rules. We also conduct experiments on the application of association rules to product recommendation. The experimental results show that the non-redundant association rules extracted using the proposed method retain the same inference capacity as the entire rule set. This result indicates that using only the non-redundant rules is sufficient to solve real problems, without needing the entire rule set.
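To make the certainty factor criterion concrete, here is a minimal Python sketch of how support, confidence and the certainty factor of a rule can be computed from a transaction database; the toy transactions and function names are illustrative only and are not the paper's implementation, which operates on frequent closed itemsets and their generators.

```python
# Toy transaction database; in practice these would come from the dataset
# being mined (hypothetical example data).
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
    {"bread", "milk"},
]

def support(itemset, db):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(1 for t in db if itemset <= t) / len(db)

def confidence(antecedent, consequent, db):
    """conf(A -> B) = supp(A u B) / supp(A)."""
    return support(antecedent | consequent, db) / support(antecedent, db)

def certainty_factor(antecedent, consequent, db):
    """Certainty factor of A -> B, in [-1, 1]; positive values mean the
    antecedent increases belief in the consequent beyond its base rate."""
    conf = confidence(antecedent, consequent, db)
    supp_b = support(consequent, db)
    if conf > supp_b:
        return (conf - supp_b) / (1 - supp_b)
    if conf < supp_b:
        return (conf - supp_b) / supp_b
    return 0.0

rule = (frozenset({"bread"}), frozenset({"milk"}))
print("conf:", confidence(*rule, transactions))
print("CF:  ", certainty_factor(*rule, transactions))
```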
Abstract:
Vehicular traffic in urban areas may adversely affect urban water quality through the build-up of traffic-generated semi-volatile and non-volatile organic compounds (SVOCs and NVOCs) on road surfaces. The characterisation of the build-up processes is the key to developing mitigation measures for the removal of such pollutants from urban stormwater. An in-depth analysis of the build-up of SVOCs and NVOCs was undertaken in the Gold Coast region in Australia. Principal Component Analysis (PCA) and multi-criteria decision tools such as PROMETHEE and GAIA were employed to understand SVOC and NVOC build-up under combined traffic scenarios of low, moderate and high traffic in different land uses. It was found that congestion in the commercial areas and the use of lubricants and motor oils in the industrial areas were the main sources of SVOCs and NVOCs on urban roads, respectively. The contribution from residential areas to the build-up of such pollutants was hardly noticeable. It was also revealed through this investigation that the target SVOCs and NVOCs were mainly attached to particulate fractions of 75 to 300 µm, whilst the redistribution of coarse fractions due to vehicle activity mainly occurred in the >300 µm size range. Lastly, under the combined traffic scenario, moderate traffic with average daily traffic ranging from 2300 to 5900 and average congestion of 0.47 was found to dominate SVOC and NVOC build-up on roads.
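As an illustration of the PCA step in this kind of build-up analysis, the following Python sketch applies scikit-learn's PCA to a small, invented site-by-pollutant matrix; the variable names, particle-size columns and numbers are hypothetical and do not reproduce the study's data or its PROMETHEE/GAIA analysis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical build-up loads (e.g. mg/m^2) per road site: columns are
# illustrative SVOC/NVOC measures split by particle-size fraction.
feature_names = ["svoc_75_150um", "svoc_150_300um", "nvoc_75_150um",
                 "nvoc_150_300um", "nvoc_gt_300um"]
X = np.array([
    [0.8, 1.2, 0.5, 0.9, 0.3],   # residential, low traffic
    [2.1, 2.9, 1.6, 2.2, 0.7],   # commercial, congested
    [1.7, 2.4, 2.8, 3.1, 1.0],   # industrial, moderate traffic
    [0.9, 1.1, 0.6, 0.8, 0.4],   # residential, low traffic
    [2.3, 3.1, 1.8, 2.5, 0.8],   # commercial, congested
])

# Standardise first so that every variable contributes on the same scale.
X_std = StandardScaler().fit_transform(X)

pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)          # site scores on PC1/PC2
loadings = pca.components_                 # variable loadings

print("explained variance ratio:", pca.explained_variance_ratio_)
for name, load in zip(feature_names, loadings.T):
    print(f"{name:>16}: PC1={load[0]:+.2f}  PC2={load[1]:+.2f}")
```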
Abstract:
This paper develops a general theory of validation gating for non-linear non-Gaussian models. Validation gates are used in target tracking to cull very unlikely measurement-to-track associations, before remaining association ambiguities are handled by a more comprehensive (and expensive) data association scheme. The essential property of a gate is to accept a high percentage of correct associations, thus maximising track accuracy, but provide a sufficiently tight bound to minimise the number of ambiguous associations. For linear Gaussian systems, the ellipsoidal validation gate is standard, and possesses the statistical property whereby a given threshold will accept a certain percentage of true associations. This property does not hold for non-linear non-Gaussian models. As a system departs from linear-Gaussian, the ellipsoid gate tends to reject a higher than expected proportion of correct associations and permit an excess of false ones. In this paper, the concept of the ellipsoidal gate is extended to permit correct statistics for the non-linear non-Gaussian case. The new gate is demonstrated by a bearing-only tracking example.
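For reference, the standard linear-Gaussian ellipsoidal gate that the paper takes as its starting point can be sketched as follows in Python; the innovation covariance, measurement values and acceptance probability used here are illustrative only.

```python
import numpy as np
from scipy.stats import chi2

def ellipsoidal_gate(z, z_pred, S, prob=0.99):
    """Standard ellipsoidal validation gate for a linear-Gaussian filter.

    Accepts measurement z if the normalised innovation squared (Mahalanobis
    distance of the innovation) falls below the chi-square quantile chosen
    so that `prob` of true associations are accepted.
    """
    nu = z - z_pred                          # innovation
    d2 = nu @ np.linalg.solve(S, nu)         # nu' S^{-1} nu
    gamma = chi2.ppf(prob, df=len(z))        # gate threshold
    return d2 <= gamma

# Illustrative 2-D example (all values are made up).
z_pred = np.array([1.00, 0.52])              # predicted measurement
S = np.array([[0.04, 0.01],
              [0.01, 0.02]])                 # innovation covariance
z = np.array([1.10, 0.49])                   # received measurement

print(ellipsoidal_gate(z, z_pred, S))        # True -> keep this association
```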
Abstract:
Estimating and predicting degradation processes of engineering assets is crucial for reducing cost and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e. condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided when an asset is still in a good health state. A practical difficulty in condition-based maintenance (CBM) is that, in most situations, degradation indicators extracted from CM data can only partially reveal asset health states. Underestimating this uncertainty in the relationship between degradation indicators and health states can cause excessive false alarms or failures without pre-alarms. The state space model provides an efficient approach to describing a degradation process using indicators that only partially reveal health states. However, existing state space models of asset degradation processes largely depend on assumptions of discrete time, discrete state, linearity and Gaussianity. The discrete time assumption requires that failures and inspections only happen at fixed intervals. The discrete state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets. This research proposes a Gamma-based state space model, free of the discrete time, discrete state, linear and Gaussian assumptions, to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives. In addition, this research also develops a continuous state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model and is subject to various maintenance strategies. Optimal maintenance strategies are obtained by solving the POSMDP. Simulation studies were performed in MATLAB; case studies using data from an accelerated life test of a gearbox and from the liquefied natural gas industry were also conducted. The results show that the proposed Monte Carlo-based EM algorithm can estimate model parameters accurately. The results also show that the proposed Gamma-based state space model fits the monotonically increasing degradation data from the accelerated life test of the gearbox better than linear and Gaussian state space models. Furthermore, both the simulation studies and the case studies show that the prediction algorithm based on the Gamma-based state space model can identify the mean value and confidence interval of asset remaining useful lives accurately. In addition, the simulation study shows that the proposed maintenance strategy optimisation method based on the POSMDP is more flexible than one that assumes a predetermined strategy structure and uses renewal theory. Moreover, the simulation study also shows that the proposed maintenance optimisation method can obtain more cost-effective strategies than a recently published maintenance strategy optimisation method, by simultaneously optimising the next maintenance activity and the waiting time until it.
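As a rough illustration of the kind of gamma-process degradation modelling and Monte Carlo remaining-useful-life estimation discussed above, the following Python sketch simulates a monotonically increasing gamma-increment degradation path and estimates remaining useful life by forward simulation to a failure threshold; the shape, scale and threshold values are invented, and this is not the thesis's estimator or its POSMDP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative gamma-process degradation: increments over each interval are
# Gamma(shape=alpha*dt, scale=beta), so the path is monotonically increasing.
alpha, beta, dt = 1.5, 0.2, 1.0
failure_threshold = 10.0

def simulate_path(x0, n_steps):
    """Simulate a degradation path of `n_steps` gamma increments from x0."""
    increments = rng.gamma(shape=alpha * dt, scale=beta, size=n_steps)
    return x0 + np.cumsum(increments)

def rul_monte_carlo(x_now, n_paths=5000, horizon=200):
    """Monte Carlo estimate of remaining useful life from current state x_now:
    time until a simulated path first crosses the failure threshold."""
    lives = []
    for _ in range(n_paths):
        path = simulate_path(x_now, horizon)
        crossed = np.nonzero(path >= failure_threshold)[0]
        lives.append(dt * (crossed[0] + 1) if crossed.size else dt * horizon)
    lives = np.array(lives)
    return lives.mean(), np.percentile(lives, [5, 95])

mean_rul, (lo, hi) = rul_monte_carlo(x_now=6.0)
print(f"estimated RUL: {mean_rul:.1f} (90% interval {lo:.1f}-{hi:.1f})")
```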
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no "gold standard" test is currently available to assess tear film integrity. Therefore, improving techniques for the assessment of tear film quality is of clinical significance and is the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and its reflection from the ocular surface is imaged on a charge-coupled device (CCD). The light is reflected from the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film. However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify changes in the reflected pattern and to extract a time series estimate of TFSQ from the video recording. The routine extracts a maximized area of analysis from each frame of the video recording, and within this area a TFSQ metric is calculated. Initially, two metrics, based on Gabor filter and Gaussian gradient-based techniques, were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV for assessing the tear film and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, followed closely by HSV. The DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique, identified during this clinical study, was a lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis was transformed from Cartesian to polar coordinates, converting the concentric ring pattern into an image of quasi-straight lines from which a block statistics value was extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study, based on ray-tracing techniques and topographical models of the tear film, was undertaken to better understand the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of tear film dynamics; for instance, the model extracted for the build-up phase offered insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model the time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to selecting the appropriate model order so that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could be a useful clinical tool to assess tear film surface quality in the future.
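The Cartesian-to-polar transformation and block-statistics idea can be sketched as follows in Python; the synthetic ring image, block size and the use of block-wise standard deviation as the TFSQ metric are assumptions for illustration, not the thesis's exact routine.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image, center, n_radii=128, n_angles=360):
    """Resample an image onto a (radius, angle) grid around `center`,
    so concentric rings become (quasi-)straight horizontal lines."""
    radii = np.linspace(0, min(image.shape) / 2 - 1, n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    r, a = np.meshgrid(radii, angles, indexing="ij")
    rows = center[0] + r * np.sin(a)
    cols = center[1] + r * np.cos(a)
    return map_coordinates(image, [rows, cols], order=1)

def block_metric(polar_image, block=(16, 30)):
    """Mean of block-wise standard deviations: higher values indicate a more
    disturbed (less regular) reflected pattern."""
    h, w = polar_image.shape
    stats = [polar_image[i:i + block[0], j:j + block[1]].std()
             for i in range(0, h - block[0] + 1, block[0])
             for j in range(0, w - block[1] + 1, block[1])]
    return float(np.mean(stats))

# Synthetic Placido-like pattern: concentric rings with a little noise.
y, x = np.mgrid[:256, :256]
radius = np.hypot(y - 128, x - 128)
rings = 0.5 + 0.5 * np.sin(radius / 4.0) + 0.05 * np.random.randn(256, 256)

polar = to_polar(rings, center=(128, 128))
print("TFSQ block metric:", block_metric(polar))
```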
Abstract:
The potential restriction to effective dispersal and gene flow caused by habitat fragmentation can apply at multiple levels of evolutionary scale: from the fragmentation of ancient supercontinents driving diversification and speciation on disjunct landmasses, to the isolation of proximate populations as a result of their inability to cross intervening unsuitable habitat. Investigating the role of habitat fragmentation in driving diversity within and among taxa can thus include inferences of phylogenetic relationships among taxa, assessments of intraspecific phylogeographic structure and analyses of gene flow among neighbouring populations. The proposed Gondwanan clade within the chironomid (non-biting midge) subfamily Orthocladiinae (Diptera: Chironomidae) represents a model system for investigating the role that population fragmentation and isolation have played at different evolutionary scales. A pilot study by Krosch et al. (2009) identified several highly divergent lineages restricted to ancient rainforest refugia and limited gene flow among proximate sites within a refuge for one member of this clade, Echinocladius martini Cranston. That study provided a framework for investigating the evolutionary history of this taxon and its relatives more thoroughly. Populations of E. martini were sampled in the Paluma bioregion of northeast Queensland to investigate patterns of fine-scale within- and among-stream dispersal and gene flow within a refuge more rigorously. Data were incorporated from Krosch et al. (2009), and additional sites were sampled up- and downstream of the original sites. Analyses of genetic structure revealed strong natal site fidelity and high genetic structure among geographically proximate streams. Little evidence was found for regular headwater exchange among upstream sites, but there was distinct evidence for rare adult flight among sites on separate stream reaches. Overall, however, the distribution of shared haplotypes implied that both larval and adult dispersal was largely limited to the natal stream channel. Patterns of regional phylogeographic structure were examined in two related austral orthoclad taxa – Naonella forsythi Boothroyd from New Zealand and Ferringtonia patagonica Sæther and Andersen from southern South America – to provide a comparison with patterns revealed in their close relative E. martini. Both taxa inhabit tectonically active areas of the southern hemisphere that have also experienced several glaciation events throughout the Plio-Pleistocene, which are thought to have affected population structure dramatically in many taxa. Four highly divergent lineages estimated to have diverged since the late Miocene were revealed in each taxon, mirroring patterns in E. martini; however, there was no evidence for local geographical endemism, implying substantial range expansion post-diversification. The differences in pattern evident among the three related taxa were suggested to have been influenced by variation in the responses of closed forest habitat to climatic fluctuations during interglacial periods across the three landmasses. Phylogeographic structure in E. martini was resolved at a continental scale by expanding upon the sampling design of Krosch et al. (2009) to encompass populations in southeast Queensland, New South Wales and Victoria.
Patterns of phylogeographic structure were consistent with expectations and several previously unrecognised lineages were revealed from central- and southern Australia that were geographically endemic to closed forest refugia. Estimated divergence times were congruent with the timing of Plio-Pleistocene rainforest contractions across the east coast of Australia. This suggested that dispersal and gene flow of E. martini among isolated refugia was highly restricted and that this taxon was susceptible to the impacts of habitat change. Broader phylogenetic relationships among taxa considered to be members of this Gondwanan orthoclad group were resolved in order to test expected patterns of evolutionary affinities across the austral continents. The inferred phylogeny and estimated divergence times did not accord with expected patterns based on the geological sequence of break-up of the Gondwanan supercontinent and implied instead several transoceanic dispersal events post-vicariance. Difficulties in appropriate taxonomic sampling and accurate calibration of molecular phylogenies notwithstanding, the sampling regime implemented in the current study has been the most intensive yet performed for austral members of the Orthocladiinae and unsurprisingly has revealed both novel taxa and phylogenetic relationships within and among described genera. Several novel associations between life stages are made here for both described and previously unknown taxa. Investigating evolutionary relationships within and among members of this clade of proposed Gondwanan orthoclad taxa has demonstrated that a complex interaction between historical population fragmentation and dispersal at several levels of evolutionary scale has been important in driving diversification in this group. While interruptions to migration, colonisation and gene flow driven by population fragmentation have clearly contributed to the development and maintenance of much of the diversity present in this group, long-distance dispersal has also played a role in influencing diversification of continental biotas and facilitating gene flow among disjunct populations.
Abstract:
This paper reviews the current status of the application of optical non-destructive methods, particularly infrared (IR) and near infrared (NIR), in the evaluation of the physiological integrity of articular cartilage. It is concluded that a significant amount of work is still required in order to achieve specificity and clinical applicability of these methods in the assessment and treatment of dysfunctional articular joints.
Abstract:
My aim in this paper is to challenge the increasingly common view in the literature that the law on end of life decision making is in disarray and is in need of urgent reform. My argument is that this assessment of the law is based on assumptions about the relationship between the identity of the defendant and their conduct, and about the nature of causation, which, on examination, prove to be indefensible. I then provide a clarification of the relationship between causation and omissions which proves that the current legal position does not need modification, at least on the grounds that are commonly advanced for the converse view. This enables me, in conclusion, to clarify important conceptual and moral differences between withholding, refusing and withdrawing life-sustaining measures on the one hand, and assisted suicide and euthanasia, on the other.
Abstract:
This paper reports on the development of a tool that generates randomised, non-multiple-choice assessment within the BlackBoard Learning Management System interface. An accepted weakness of multiple-choice assessment is that it cannot elicit learning outcomes from the upper levels of Biggs' SOLO taxonomy. However, written assessment items require extensive resources for marking, and are susceptible to copying as well as marking inconsistencies for large classes. This project developed an assessment tool that is valid, reliable and sustainable and that addresses the issues identified above. The tool provides each student with an assignment assessing the same learning outcomes but containing different questions, with responses in the form of words or numbers. Practice questions are available, enabling students to obtain feedback on their approach before submitting their assignment. Thus, the tool incorporates automatic marking (essential for large classes), randomised tasks for each student (reducing copying), the capacity to give credit for working (feedback on the application of theory), and the capacity to target higher order learning outcomes by requiring students to derive their answers rather than choosing them. Results and feedback from students are presented, along with technical implementation details.
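A minimal Python sketch of the per-student randomisation and tolerance-based automatic marking described above might look like the following; the question template, seeding scheme and tolerance are hypothetical and unrelated to the tool's actual Blackboard implementation.

```python
import hashlib
import random

def student_parameters(student_id, assignment_id="PHYS101-A2"):
    """Derive a reproducible random parameter set for one student, so every
    student gets the same learning outcome but different numbers."""
    seed = int(hashlib.sha256(f"{assignment_id}:{student_id}".encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return {"mass_kg": rng.randint(2, 12), "accel_ms2": round(rng.uniform(1.5, 9.5), 1)}

def expected_answer(params):
    """Model answer for the illustrative question 'F = m * a'."""
    return params["mass_kg"] * params["accel_ms2"]

def mark(student_id, submitted_value, tolerance=0.01):
    """Automatic marking: full credit if the numeric response is within a
    relative tolerance of that student's own expected answer."""
    expected = expected_answer(student_parameters(student_id))
    return abs(submitted_value - expected) <= tolerance * abs(expected)

params = student_parameters("n1234567")
print("question parameters:", params)
print("correct?", mark("n1234567", expected_answer(params)))
```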
Abstract:
This study investigated depressive symptom and interpersonal relatedness outcomes from eight sessions of manualized narrative therapy for 47 adults with major depressive disorder. Post-therapy, depressive symptom improvement (d = 1.36) and the proportions of clients achieving reliable improvement (74%), movement to the functional population (61%) and clinically significant improvement (53%) were comparable to benchmark research outcomes. Post-therapy interpersonal relatedness improvement (d = .62) was less substantial than that for symptoms. Three-month follow-up found maintenance of symptom gains, but not of interpersonal gains. Benchmarking and clinical significance analyses mitigated repeated-measures design limitations, providing empirical evidence to support narrative therapy for adults with major depressive disorder.
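For readers unfamiliar with the quantities reported above, the following Python sketch computes a pre-post Cohen's d and the Jacobson-Truax reliable change index that typically underlies "reliable improvement"; the sample scores and the reliability coefficient are invented for illustration and are not the study's data.

```python
import numpy as np

def cohens_d(pre, post):
    """Pre-post effect size using the pooled standard deviation."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
    return (pre.mean() - post.mean()) / pooled_sd

def reliable_change_index(x_pre, x_post, sd_pre, reliability):
    """Jacobson-Truax RCI: |RCI| > 1.96 is conventionally taken as
    reliable change (improvement or deterioration)."""
    se_measurement = sd_pre * np.sqrt(1 - reliability)
    s_diff = np.sqrt(2 * se_measurement ** 2)
    return (x_post - x_pre) / s_diff

# Invented depression scores for illustration (lower = better).
pre = [28, 31, 25, 33, 27, 30]
post = [14, 18, 12, 22, 15, 17]

print("Cohen's d:", round(cohens_d(pre, post), 2))
print("RCI for one client:",
      round(reliable_change_index(28, 14, sd_pre=np.std(pre, ddof=1),
                                  reliability=0.90), 2))
```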