Abstract:
Many of the costs associated with greenfield residential development are apparent and tangible. For example, regulatory fees, government taxes, acquisition costs, selling fees, commissions and others are all relatively easily identified, since they represent actual costs incurred at a given point in time. By contrast, holding costs are not always immediately evident, since they characteristically lack visibility. One reason for this is that, for the most part, they are assessed over time in an ever-changing environment. In addition, wide variations exist in development pipeline components: they typically span anywhere from two to over sixteen years, even for projects located within the same geographical region. Determining the start and end points for holding cost computation can also prove problematic. Furthermore, the choice between applying prevailing inflation rates, interest rates, or a combination of both over time adds further complexity. Although research is emerging in these areas, a review of the literature reveals that attempts to identify holding cost components are limited. Their quantification (in terms of relative weight or proportionate cost to a development project) is even less apparent; in fact, the computation and methodology behind the calculation of holding costs varies widely and is in some instances ignored completely. In addition, it may be demonstrated that ambiguities exist in the inclusion of various elements of holding costs and the assessment of their relative contribution. Yet their impact on housing affordability is widely acknowledged to be profound, and their quantification could maximise the opportunities for delivering affordable housing. This paper seeks to build on earlier investigations into the elements related to holding costs, providing theoretical modelling of the size of their impact, specifically on the end user.
At this point the research relies on quantitative data sets; however, additional qualitative analysis (not included here) will be relevant to account for certain variations between developers' expectations and the actual outcomes they achieve. Although this research stops short of a regional or international comparison study, it delivers an improved understanding of the relationship between holding costs, regulatory charges, and housing affordability.
Abstract:
The use of ultra-thin films as dressings for cutaneous wounds could prove advantageous in terms of better conformity to wound topography and improved vapour transmission. For this purpose, ultra-thin poly(epsilon-caprolactone) (PCL) films of 5-15 µm thickness were fabricated via a biaxial stretching technique. To evaluate their in vivo biocompatibility and feasibility as an external wound dressing, PCL films were applied over full- and partial-thickness wounds in rat and pig models. Different groups of PCL films were used: untreated, NaOH-treated, untreated with fibrin, NaOH-treated with perforations, and NaOH-treated with fibrin and S-nitrosoglutathione. Wounds with no external dressings were used as controls. Wound contraction rate, histology and biomechanical analyses were carried out. Wounds re-epithelialized completely at comparable rates. Formation of a neo-dermal layer and re-epithelialization were observed in all the wounds. A lower level of fibrosis was observed when PCL films were used, compared with the control wounds. The ultimate tensile strength of the regenerated tissue in rats reached 50-60% of that of native rat skin. The results indicated that biaxially-stretched PCL films did not induce inflammatory reactions when used in vivo as a wound dressing, and supported the normal wound healing process in full- and partial-thickness wounds.
Abstract:
What is a record producer? There is a degree of mystery and uncertainty about just what goes on behind the studio door. Some producers are seen as Svengali-like figures manipulating artists into a mass consumer product. Producers are sometimes seen as mere technicians whose job is simply to set up a few microphones and press the record button. Close examination of the recording process will show how far this is from a complete picture. Artists are special—they come with inspiration and talent, but also with a variety of complications, and in many ways a recording studio can seem the least likely place for creative expression and for an affective performance to happen. The task of the record producer is to engage with these artists and their songs and turn these potentials into form through the technology of the recording studio. The purpose of the exercise is to disseminate this fixed form to an imagined audience—generally in the hope that this audience will prove to be real. Finding an audience is the role of the record company. A record producer must also engage with the commercial expectations of the interests that underwrite a recording. This dissertation considers three fields of interest in the recording process: the performer and the song; the technology of the recording context; and the commercial ambitions of the record company—and positions the record producer as a nexus at the interface of all three. The author reports his structured recollection of five recordings, with three different artists, that all achieved substantial commercial success. The processes are considered from the author’s perspective as the record producer, from inception of the project to completion of the recorded work. What were the processes of engagement? Do the actions reported conform to the template of nexus?
This dissertation proposes that in all recordings the function of producer/nexus is present and necessary—it exists in the interaction of the artistry and the technology. The art of record production is to engage with these artists and the songs they bring and turn these potentials into form.
Abstract:
The changes in economic status in Malaysia have led to many psychosocial problems, especially among young people. Counselling and psychotherapy, as practised in Western culture, have been seen as one of the solutions. Most counselling theorists believe that their theory is universal, yet there is limited research to prove it. This paper describes an ongoing study conducted in Malaysia on the applicability of one Western counselling theory, Bowen's family systems theory. In Bowen's view, differentiation of self within the family allows a person both to leave the family's boundaries in search of uniqueness and to continually return to the family in order to further establish a sense of belonging. The study comprises four measures: the Differentiation of Self Inventory (DSI), the Family Inventory of Life Events (ILE), the Depression Anxiety and Stress Scale (DASS) and the Connor-Davidson Resilience Scale (CD-RISC). Preliminary findings are discussed, and the implications for enhancing the quality of teaching family counselling in universities are explored.
Abstract:
Background: Young motorists engaging in anti-social and often dangerous driving manoeuvres (often referred to as “hooning” within Australia) are an increasing road safety problem. While anecdotal evidence suggests that such behaviour is positively linked with crash involvement, researchers have yet to examine whether younger drivers who deliberately break road rules and drive in an erratic manner (usually with peers) are in fact over-represented in crash statistics. This paper outlines research that aimed to identify the characteristics of the individuals most likely to engage in hooning behaviours, examine the frequency of such driving behaviours, and determine whether such activity is linked with self-reported crash involvement.
Methods: A total of 717 young drivers in Queensland voluntarily completed a questionnaire investigating their driving behaviour and crash history.
Results: Quantitative analysis of the data revealed that almost half the sample reported engaging in some form of “hooning” behaviour at least once in their lifetime, although only 4% indicated heavy participation in the behaviour (e.g., >50 times). Street racing was the most common activity reported by participants, followed by “drifting” and then “burnouts”. Logistic regression analysis indicated that being younger and male was predictive of reporting such anti-social driving behaviours, and importantly, a trend was identified between such behaviour and self-reported crash involvement.
Conclusions: This research provides preliminary evidence that younger male drivers are more likely to engage in dangerous driving behaviours, which may ultimately increase their overall risk of becoming involved in a crash.
This paper further outlines the study findings with regard to current enforcement efforts to deter such driving activity, and provides direction for future research efforts in this area.
Research highlights: ► The self-reported driving behaviours of 717 younger Queensland drivers were examined to investigate the relationship between deliberately breaking road rules and self-reported crash involvement. ► Younger male drivers were most likely to engage in such aberrant driving behaviours, and a trend was identified between such behaviour and self-reported crash involvement.
Abstract:
Recently, the numerical modelling and simulation of fractional partial differential equations (FPDEs), which have found wide application in modern engineering and science, have attracted increasing attention. The currently dominant numerical method for modelling FPDEs is the explicit finite difference method (FDM), which is based on a pre-defined grid and therefore inherits that grid's issues and shortcomings. This paper aims to develop an implicit meshless approach based on radial basis functions (RBFs) for the numerical simulation of time fractional diffusion equations. The discrete system of equations is obtained using the RBF meshless shape functions and the strong form. The stability and convergence of this meshless approach are then discussed and theoretically proven. Several numerical examples with different problem domains are used to validate and investigate the accuracy and efficiency of the newly developed meshless formulation. The results obtained by the meshless formulation are also compared with those obtained by the FDM in terms of accuracy and efficiency. It is concluded that the present meshless formulation is very effective for the modelling and simulation of FPDEs.
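As a concrete illustration of the kind of scheme involved, the following is a minimal 1D sketch that combines an RBF collocation differentiation matrix with the standard L1 discretisation of the Caputo time-fractional derivative. The multiquadric basis, shape parameter, boundary conditions and test problem are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np
from math import gamma

def mq_second_derivative_matrix(x, c=0.2):
    """Differentiation matrix L such that u_xx(nodes) ≈ L @ u(nodes),
    from multiquadric RBF collocation on the nodes x."""
    r2 = (x[:, None] - x[None, :]) ** 2
    A = np.sqrt(r2 + c * c)                # RBF interpolation matrix
    D2 = c * c / (r2 + c * c) ** 1.5       # d2/dx2 of the MQ basis
    return D2 @ np.linalg.inv(A)

def l1_caputo_step(history, L, alpha, tau, K):
    """One implicit step of D_t^alpha u = K u_xx (0 < alpha < 1) using
    the L1 scheme; history = [u^0, ..., u^{n-1}] carries the full
    memory that is characteristic of fractional models."""
    n = len(history)
    b = [(j + 1) ** (1 - alpha) - j ** (1 - alpha) for j in range(n)]
    rhs = history[-1].copy()
    for j in range(1, n):                  # fractional memory terms
        rhs -= b[j] * (history[n - j] - history[n - j - 1])
    M = np.eye(len(rhs)) - K * tau ** alpha * gamma(2 - alpha) * L
    M[0, :] = 0.0;  M[0, 0] = 1.0;  rhs[0] = 0.0    # Dirichlet u = 0
    M[-1, :] = 0.0; M[-1, -1] = 1.0; rhs[-1] = 0.0
    return np.linalg.solve(M, rhs)

x = np.linspace(0.0, 1.0, 21)
L = mq_second_derivative_matrix(x)
hist = [np.sin(np.pi * x)]                 # initial condition
for _ in range(40):
    hist.append(l1_caputo_step(hist, L, alpha=0.8, tau=0.01, K=1.0))
```

Unlike an explicit FDM, the implicit solve above is unconditionally stable in the classical limit, and the collocation nodes need not form a regular grid, which is the point of the meshless approach.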
Abstract:
This thesis investigates the coefficient of performance (COP) of a hybrid liquid desiccant solar cooling system. This hybrid cooling system comprises three sections: 1) a conventional air-conditioning section; 2) a liquid desiccant dehumidification section; and 3) an air mixture section. An air handling unit (AHU) with a mixed variable-air-volume design is included in the hybrid cooling system to control humidity. In the combined system, the air is first dehumidified in the dehumidifier and then mixed with ambient air by the AHU before entering the evaporator. Experiments using lithium chloride as the liquid desiccant have been carried out to evaluate the performance of the dehumidifier and regenerator. Based on the air mixture (AHU) design, models of the electrical coefficient of performance (ECOP), thermal coefficient of performance (TCOP) and whole-system coefficient of performance (COPsys) of the hybrid liquid desiccant solar cooling system were developed to evaluate its performance. These mathematical models describe the coefficient-of-performance trend under different ambient conditions, while also providing a convenient comparison with conventional air-conditioning systems. They explain the relationship between the models' performance predictions and ambient air parameters. The simulation results reveal that the coefficient of performance of a hybrid liquid desiccant solar cooling system depends substantially on ambient air and dehumidifier parameters. The liquid desiccant experiments also show that the latent component of the total cooling load can easily be met by the liquid desiccant dehumidifier. While cooling requirements can be met, the liquid desiccant system is still subject to hysteresis problems.
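At their core, the three indices are ratios of cooling delivered to energy supplied. The sketch below assumes the conventional definitions (cooling output over electrical input, over thermal input, and over total input, respectively); the thesis models additionally depend on ambient and dehumidifier parameters, which are omitted here:

```python
def ecop(q_cooling, w_electric):
    """Electrical COP: cooling delivered per unit of electrical input."""
    return q_cooling / w_electric

def tcop(q_cooling, q_thermal):
    """Thermal COP: cooling delivered per unit of (solar) heat supplied,
    e.g. to regenerate the liquid desiccant."""
    return q_cooling / q_thermal

def cop_sys(q_cooling, w_electric, q_thermal):
    """Whole-system COP: cooling delivered per unit of total input."""
    return q_cooling / (w_electric + q_thermal)

# Assumed operating point (illustrative values, in kW):
e = ecop(10.0, 3.0)
t = tcop(10.0, 12.0)
s = cop_sys(10.0, 3.0, 12.0)
```

Because the solar thermal input is cheap relative to electricity, a hybrid system can show an attractive ECOP even when COPsys is below that of a conventional vapour-compression unit.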
Abstract:
To achieve best environmental management practice in Queensland, effort needs to be extended into the private sector. A Regional Landscape Strategy compiled for any substantial new proposal must identify the most promising technique(s) (from an available tool kit of 13) by which a developer (of any type) is more likely to sustain on-site resources while assisting government to deliver its future plans in any region of the State. Offsetting may prove to be one of the most effective of these tools. However, policy must address 'offset land mitigation', whereby the necessary financial incentives are introduced. Practicable methods by which offset sites can be selected, and their consequent environmental benefit measured, have now been devised and tested to assist this process.
Abstract:
We assess the performance of an exponential integrator for advancing stiff, semidiscrete formulations of the unsaturated Richards equation in time. The scheme is of second order and explicit in nature, but requires the action of the matrix function φ(A), where φ(z) = [exp(z) - 1]/z, on a suitably defined vector v at each time step. When the matrix A is large and sparse, φ(A)v can be approximated by Krylov subspace methods that require only matrix-vector products with A. We prove that despite the use of this approximation the scheme remains second order. Furthermore, we provide a practical variable-stepsize implementation of the integrator by deriving an estimate of the local error that requires only a single additional function evaluation. Numerical experiments performed on two-dimensional test problems demonstrate that this implementation outperforms second-order, variable-stepsize implementations of the backward differentiation formulae.
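The φ(A)v product at the heart of such a scheme can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the standard Arnoldi approach, evaluating φ on the small Hessenberg matrix via an augmented matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

def phi(Z):
    """phi(z) = (exp(z) - 1)/z evaluated on a small dense matrix Z,
    via the augmented-matrix identity: expm([[Z, I], [0, 0]]) holds
    phi(Z) in its upper-right block."""
    n = Z.shape[0]
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = Z
    M[:n, n:] = np.eye(n)
    return expm(M)[:n, n:]

def phi_krylov(A, v, m=20):
    """Approximate phi(A) @ v from an m-dimensional Krylov subspace
    (Arnoldi); only matrix-vector products with A are required."""
    n = len(v)
    m = min(m, n)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):             # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:            # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # phi(A) v ≈ beta * V_m @ phi(H_m) @ e_1
    return beta * V[:, :m] @ phi(H[:m, :m])[:, 0]
```

For the stiff semidiscrete Richards system, A is large and sparse and only its action on vectors is needed, which is exactly what the Arnoldi loop consumes; a small dense test matrix suffices to check the approximation.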
Abstract:
The knowledge and skills of fashion and textiles design have traditionally been transferred through the indenture of an apprentice to a master. This relationship relied heavily on the transfer of explicit methods of design and making, but also on the transfer of tacit knowledge, explained by Michael Polanyi as knowledge that cannot be explicitly known. By watching the master and emulating his efforts in the presence of his example, the apprentice unconsciously picks up the rules of the art, including those which are not explicitly known to the master himself (Polanyi, 1962, p. 53). However, it has been almost half a century since Michael Polanyi defined the tacit dimension as a state in which “we can know more than we can tell” (Polanyi, 1967, p. 4), at a time when the accepted means of ‘telling’ was through academic writing and publishing in hardcopy format. The idea that tacit knowledge transfer involves a one-to-one relationship between apprentice and master would appear to have dire consequences for a discipline, such as fashion design, where there is no such tradition of academic writing. This paper counters this point of view by providing examples of strategies currently being employed in online environments (principally through ‘craft’) and explains how these methods might prove useful in supporting tacit knowledge transfer with respect to academic research within the field of fashion design, and in the wider academic community involved in creative practice research. A summary of the implications of these new ideas for contemporary fashion research concludes the paper.
Abstract:
Guanxi has become a common term in the wider business environment and has attracted the increasing attention of researchers. Despite this, a consistent understanding of the concept continues to prove elusive. We review the extant business literature to highlight the major inconsistencies in the way guanxi is currently conceptualized: its breadth, linguistic-cultural depth, temporality, and level of analysis. We conclude with a clearer conceptualization of guanxi which separates its core elements from its antecedents and consequences. Furthermore, we compare and contrast guanxi with Western correlates such as social networks and social capital to further consolidate our understanding of guanxi.
Abstract:
A system for agroinoculating rice tungro bacilliform virus (RTBV), one of the two viruses of the rice tungro disease complex, has been optimised. A nontumour-inducing strain of Agrobacterium (pGV3850) was used in order to conform with biosafety regulations. Fourteen-day-old seedlings survived the mechanical damage of the technique and were still young enough to support virus replication. The level of the bacterial inoculum was important for obtaining maximum infection, with a high inoculum level (0.5 × 10¹² cells/ml) resulting in up to 100% infection of a susceptible variety, comparable with infection by insect transmission. Agroinoculation with RTBV was successful for all three rice cultivars tested: TN1 (tungro susceptible), Balimau Putih (tungro tolerant), and IR26 (RTSV and vector resistant). Agroinoculation enables resistance to RTBV to be distinguished from resistance to the leafhopper vector of the virus, and should prove useful in screening rice germplasm, breeding materials, and transgenic rice lines.
Abstract:
Association rule mining has contributed to many advances in the area of knowledge discovery. However, the quality of the discovered association rules is a big concern and has drawn more and more attention recently. One problem with the quality of the discovered association rules is the huge size of the extracted rule set. Often for a dataset, a huge number of rules can be extracted, but many of them can be redundant to other rules and thus useless in practice. Mining non-redundant rules is a promising approach to solve this problem. In this paper, we first propose a definition for redundancy, then propose a concise representation, called a Reliable basis, for representing non-redundant association rules. The Reliable basis contains a set of non-redundant rules which are derived using frequent closed itemsets and their generators instead of using frequent itemsets that are usually used by traditional association rule mining approaches. An important contribution of this paper is that we propose to use the certainty factor as the criterion to measure the strength of the discovered association rules. Using this criterion, we can ensure the elimination of as many redundant rules as possible without reducing the inference capacity of the remaining extracted non-redundant rules. We prove that the redundancy elimination, based on the proposed Reliable basis, does not reduce the strength of belief in the extracted rules. We also prove that all association rules, their supports and confidences, can be retrieved from the Reliable basis without accessing the dataset. Therefore the Reliable basis is a lossless representation of association rules. Experimental results show that the proposed Reliable basis can significantly reduce the number of extracted rules. We also conduct experiments on the application of association rules to the area of product recommendation. 
The experimental results show that the non-redundant association rules extracted using the proposed method retain the same inference capacity as the entire rule set. This result indicates that the non-redundant rules alone are sufficient to solve real problems, without using the entire rule set.
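As a toy illustration of the certainty factor used as a rule-strength criterion, the sketch below computes it from support and confidence in the classical Shortliffe-Buchanan sense; the transactions and item names are invented, and the Reliable basis construction itself is not reproduced here:

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def certainty_factor(x, y, transactions):
    """Certainty factor of the rule X -> Y: positive when X raises the
    belief in Y above its base support, negative when X lowers it."""
    supp_y = support(y, transactions)
    conf = support(x | y, transactions) / support(x, transactions)
    if conf > supp_y:
        return (conf - supp_y) / (1.0 - supp_y)
    if conf < supp_y:
        return (conf - supp_y) / supp_y
    return 0.0

# Invented toy transaction database:
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]
cf = certainty_factor({"bread"}, {"butter"}, transactions)
```

Unlike confidence alone, the certainty factor is zero when X and Y are statistically independent, which is what makes it useful for pruning rules that add no inferential strength.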
Abstract:
As detailed in Whitehead, Bunker and Chung (2011), a congestion-charging scheme provides a mechanism to combat congestion whilst simultaneously generating revenue to improve both the road and public transport networks. The aim of this paper is to assess the feasibility of implementing a congestion-charging scheme in the city of Brisbane, Australia, and to determine the potential effects of this initiative. In order to do so, a congestion-charging scheme was designed for Brisbane and modelled using the Brisbane Strategic Transport Model with a baseline year of 2026. This paper argues that the implementation of this initiative would prove effective in reducing the city's road congestion and increasing the overall sustainability of the region.
Abstract:
The contributions of this thesis fall into three areas of certificateless cryptography. The first area is encryption, where we propose new constructions for both identity-based and certificateless cryptography. We construct an n-out-of-n group encryption scheme for identity-based cryptography that does not require any special means to generate the keys of the participating trusted authorities. We also introduce a new security definition for chosen-ciphertext-secure multi-key encryption. We prove that our construction is secure as long as at least one authority is uncompromised, and show that the existing constructions for chosen-ciphertext security from identity-based encryption also hold in the group encryption case. We then consider certificateless encryption as the special case of 2-out-of-2 group encryption and give constructions for highly efficient certificateless schemes in the standard model. Among these is the first construction of a lattice-based certificateless encryption scheme. Our next contribution is a highly efficient certificateless key encapsulation mechanism (KEM), which we prove secure in the standard model. We introduce a new way of proving the security of certificateless schemes that are based on identity-based schemes: we leave the identity-based part of the proof intact and extend it only to cover the part introduced by the certificateless scheme. We show that our construction is more efficient than any instantiation of generic constructions for certificateless key encapsulation in the standard model. The third area in which the thesis contributes to the advancement of certificateless cryptography is key agreement. Swanson showed that many certificateless key agreement schemes are insecure if considered in a reasonable security model. We propose the first provably secure certificateless key agreement schemes in the strongest model for certificateless key agreement.
We extend Swanson's definition for certificateless key agreement and give more power to the adversary. Our new schemes are secure as long as each party has at least one uncompromised secret. Our first construction is in the random oracle model and gives the adversary slightly more capabilities than our second construction in the standard model. Interestingly, our standard model construction is as efficient as the random oracle model construction.