154 results for Rank and file unionism
Abstract:
Handling information overload online is, from the user's point of view, a significant challenge, especially as the number of websites grows rapidly with e-commerce and related activities. Personalization based on user needs is the key to solving the problem of information overload. Personalization methods help in identifying relevant information that a user is likely to find interesting. User profiles and object profiles are the central elements of a personalization system. When creating user and object profiles, most existing methods adopt two-dimensional similarity methods, based on vector or matrix models, to find inter-user and inter-object similarity. Moreover, for recommending similar objects to users, personalization systems use user-user, item-item and user-item similarity measures. In most cases, similarity measures such as Euclidean, Manhattan, cosine and many others based on vector or matrix methods are used to find the similarities. Web logs are high-dimensional datasets, consisting of multiple users and multiple searches, with many attributes for each. Two-dimensional data analysis methods may overlook latent relationships that exist between users and items. In contrast to other studies, this thesis utilises tensors, which are high-dimensional data models, to build user and object profiles and to find the inter-relationships between users and between users and items. To create an improved personalized Web system, this thesis proposes to build three types of profiles: individual user, group user and object profiles, utilising decomposition factors of tensor data models. A hybrid recommendation approach utilising group profiles (forming the basis of a collaborative filtering method) and object profiles (forming the basis of a content-based method) in conjunction with individual user profiles (forming the basis of a model-based approach) is proposed for making effective recommendations. A tensor-based clustering method is proposed that utilises the outcomes of popular tensor decomposition techniques such as PARAFAC, Tucker and HOSVD to group similar instances. An individual user profile, showing the user's strongest interests, is represented by the top dimension values extracted from the component matrix obtained after tensor decomposition. A group profile, showing similar users and their strongest interests, is built by clustering similar users based on tensor-decomposed values. A group profile is represented by the top association rules (containing various unique object combinations) derived from the searches made by the users in the cluster. An object profile is created to represent similar objects, clustered on the basis of the similarity of their features. Depending on the category of a user (known, anonymous or a frequent visitor to the website), any of the profiles or their combinations is used for making personalized recommendations. A ranking algorithm is also proposed that utilises the personalized information to order and rank the recommendations. The proposed methodology is evaluated on data collected from a real-life car website. Empirical analysis confirms the effectiveness of the recommendations made by the proposed approach over other collaborative filtering and content-based recommendation approaches based on two-dimensional data analysis methods.
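As a hedged illustration of the decomposition step named above, the sketch below computes a truncated HOSVD of a small users × items × attributes tensor in plain NumPy. The tensor shape, ranks and variable names are illustrative assumptions, not the thesis's actual data or implementation.

```python
import numpy as np

def unfold(tensor, mode):
    """Matricise a tensor along the given mode (mode-n unfolding)."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor, ranks):
    """Truncated higher-order SVD: per-mode factor matrices plus core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        # Left singular vectors of the mode-n unfolding give the factor matrix.
        U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = tensor
    for mode, U in enumerate(factors):
        # n-mode product with U^T projects each mode onto its factor subspace.
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Hypothetical users x items x attributes tensor built from web-log counts.
rng = np.random.default_rng(0)
logs = rng.poisson(1.0, size=(20, 15, 6)).astype(float)
core, (users_f, items_f, attrs_f) = hosvd(logs, ranks=(4, 4, 3))
# The top entries of a user's row in users_f indicate that user's strongest
# interests; clustering rows of users_f would give the group profiles.
print(users_f.shape, items_f.shape, attrs_f.shape, core.shape)
```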
Abstract:
Ultrafine particles (UFPs, <100 nm) are produced in large quantities by vehicular combustion and are implicated in several adverse human health effects. Recent work has suggested that a large proportion of daily UFP exposure may occur during commuting. However, the determinants, variability and transport-mode dependence of such exposure are not well understood. The aim of this review was to address these knowledge gaps by distilling the results of ‘in-transit’ UFP exposure studies performed to date, including studies of health effects. We identified 47 exposure studies performed across six transport modes: automobile, bicycle, bus, ferry, rail and walking. These encompassed approximately 3000 individual trips in which UFP concentrations were measured. After weighting mean UFP concentrations by the number of trips in which they were collected, we found overall mean UFP concentrations of 3.4, 4.2, 4.5, 4.7, 4.9 and 5.7 × 10^4 particles cm^-3 for the bicycle, bus, automobile, rail, walking and ferry modes, respectively. The mean concentration inside automobiles travelling through tunnels was 3.0 × 10^5 particles cm^-3. While the mean concentrations were indicative of general trends, we found that the determinants of exposure (meteorology, traffic parameters, route, fuel type, exhaust treatment technologies, cabin ventilation, filtration, deposition, UFP penetration) exhibited marked variability and mode dependence, such that it is not necessarily appropriate to rank modes in order of exposure without detailed consideration of these factors. Ten in-transit health effects studies have been conducted, and their results indicate that UFP exposure during commuting can elicit acute effects in both healthy and health-compromised individuals. We suggest that future work should focus on further defining the contribution of in-transit UFP exposure to total UFP exposure, exploring its specific health effects and investigating exposures in the developing world.
Keywords: air pollution; transport modes; acute health effects; travel; public transport
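The trip-weighted pooling described above is a plain weighted mean, sum(n_i · c_i) / sum(n_i); a minimal sketch with invented study values standing in for the review's actual data:

```python
# Trip-weighted mean UFP concentration: each study contributes its mean
# concentration c_i (particles cm^-3) weighted by its number of trips n_i.
# The values below are placeholders, not the figures pooled in the review.
studies = [(4.1e4, 120), (3.0e4, 45), (5.2e4, 80)]  # (mean conc., trips)
weighted_mean = sum(c * n for c, n in studies) / sum(n for _, n in studies)
print(f"{weighted_mean:.2e} particles cm^-3")
```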
Abstract:
At the international level, the higher education sector is currently being subjected to increased calls for public accountability and to the current move by the OECD to rank universities based on the quality of their teaching and learning outcomes. At the national level, Australian universities and their teaching staff face numerous challenges including financial restrictions, increasing student numbers and the reality of an increasingly diverse student population. The Australian higher education response to these competing policy and accreditation demands focuses on precise, explicit systems and procedures that are inflexible and conservative, and that ignore the fact that assessment is the single biggest influence on how students approach their learning. This seriously neglects the quality of student learning outcomes: assessment tasks often fail to engage students or to reflect the tasks students will face in the world of practice. Innovative assessment design, which includes new paradigms of student engagement and learning as well as pedagogically based technologies, has the capacity to provide some measure of relief from these internal and external tensions by significantly enhancing the learning experience for an increasingly time-poor population of students. That is, the assessment process has the ability to deliver program objectives and active learning through a knowledge transfer process which increases student participation and engagement. This social constructivist view highlights the importance of peer review in assisting students to participate and collaborate as equal members of a community of scholars with both their peers and academic staff members. By increasing the student's desire to learn, peer review leads to more confident, independent and reflective learners who also become more skilled at making independent judgements of their own and others' work. Within this context, in Case Study One of this project, a summative, peer-assessed, weekly assessment task was introduced in the first “serious” accounting subject offered as part of an undergraduate degree. The positive outcomes achieved included: student failure rates declined by 15%; tutorial participation increased fourfold; tutorial engagement increased sixfold; and there was a 100% student-based approval rating for the retention of the assessment task. However, in stark contrast to the positive student response, staff issues related to the loss of research time associated with the administration of the peer-review process threatened its survival. This paper contributes to the core conference topics of new trends and experiences in undergraduate assessment education, and of innovative online learning and teaching practices, by elaborating the Case Study Two “solution” generated for this dilemma. At the heart of the resolution is an e-learning peer-review process, conducted in conjunction with the University of Melbourne, which seeks both to create a virtual sense of belonging and to efficiently and effectively meet academic learning objectives with minimum staff involvement. In outlining the significant level of success achieved, student-based qualitative and quantitative data will be highlighted, along with staff views, in a comparative analysis of the advantages and disadvantages to both students and staff of the staff-led peer-review process versus its online counterpart.
Abstract:
In the era of Web 2.0, huge volumes of consumer reviews are posted to the Internet every day. Manual approaches to detecting and analyzing fake reviews (i.e., spam) are not practical due to the problem of information overload. However, the design and development of automated methods of detecting fake reviews is a challenging research problem. The main reason is that fake reviews are specifically composed to mislead readers, so they may appear the same as legitimate reviews (i.e., ham). As a result, discriminatory features that would enable individual reviews to be classified as spam or ham may not be available. Guided by the design science research methodology, the main contribution of this study is the design and instantiation of novel computational models for detecting fake reviews. In particular, a novel text mining model is developed and integrated into a semantic language model for the detection of untruthful reviews. The models are then evaluated based on a real-world dataset collected from amazon.com. The results of our experiments confirm that the proposed models outperform other well-known baseline models in detecting fake reviews. To the best of our knowledge, the work discussed in this article represents the first successful attempt to apply text mining methods and semantic language models to the detection of fake consumer reviews. A managerial implication of our research is that firms can apply our design artifacts to monitor online consumer reviews to develop effective marketing or product design strategies based on genuine consumer feedback posted to the Internet.
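The article's text-mining and semantic language models are not reproduced here; as a hedged sketch of the kind of baseline such models are evaluated against, a TF-IDF bag-of-words classifier can be built in a few lines of scikit-learn. The toy reviews and labels below are invented for illustration only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: 1 = fake (spam), 0 = genuine (ham). A real system would be
# trained on a labelled collection such as the amazon.com dataset in the study.
reviews = ["best product ever buy now amazing",
           "arrived late and the lid was cracked",
           "five stars life changing must have",
           "works fine for the price, battery could last longer"]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(reviews, labels)
print(model.predict(["unbelievable deal best ever buy today"]))
```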
Abstract:
The Comprehensive Australian Study of Entrepreneurial Emergence (CAUSEE) is the largest study of new firm creation ever undertaken in Australia. The project provides an exciting opportunity to fundamentally improve our understanding of independent entrepreneurship in Australia by studying factors that initiate, hinder and facilitate the process of emergence of new economic activities and organisations. The longitudinal project has followed a large random sample of nascent firms (NFs, n=625) and young firms (YFs, n=559) over a six-year period. NFs are ongoing start-up efforts, while YFs are already established but less than four years old. The study also includes a comparison group of non-founders and oversamples of over 100 high-potential start-ups in each category. The CAUSEE dataset contains hundreds of variables across five waves of data collection. Extensive documentation on the dataset is available in the related handbook. The CAUSEE project has received significant external funding from the Australian Research Council (DP0666616 and LP0776845), National Australia Bank, BDO Australia, and the Australian Government Department of Industry, Innovation and Science.
Abstract:
Background/aims: Cardiovascular disease (CVD) continues to impose a heavy burden in terms of cost, disability and death in Australia. Recent evidence suggests that increasing remoteness, where cardiac services are scarce, is linked to an increased risk of dying from CVD. Fatal CVD events are reported to be between 20% and 50% higher in rural areas than in major cities. Method: This project, with its extensive use of Geographic Information Systems (GIS) technology, will rank 11,338 rural and remote population centres to identify geographical ‘hotspots’ where there is likely to be a mismatch between the demand for and actual provision of cardiovascular services. It will therefore guide more equitable provision of services to rural and remote communities. Outcomes: The CARDIAC-ARIA project is designed to: map the type and location of cardiovascular services currently available in Australia, relative to the distribution of individuals who currently have symptomatic CVD; determine, by expert panel, the minimal requirements for comprehensive cardiovascular health support in metropolitan and rural communities; and derive a rating classification based on the Accessibility and Remoteness Index of Australia (ARIA) for each of Australia's 11,338 rural and remote population centres. Conclusion: This unique, innovative and highly collaborative project has the potential to deliver a powerful tool to highlight and combat the burden imposed by CVD in Australia.
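CARDIAC-ARIA's own ARIA-based classification is not reproduced here; as a hedged illustration of the underlying GIS operation, the sketch below ranks hypothetical population centres by great-circle distance to the nearest cardiac service using the haversine formula. All coordinates and names are invented.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical coordinates, not actual service locations.
services = [(-27.47, 153.03), (-34.93, 138.60)]            # cardiac service sites
centres = {"Centre A": (-26.58, 148.79), "Centre B": (-30.00, 145.90)}

# Rank centres by distance to the nearest service, most remote first.
ranked = sorted(((name, min(haversine_km(*c, *s) for s in services))
                 for name, c in centres.items()),
                key=lambda pair: -pair[1])
for name, d in ranked:
    print(f"{name}: nearest service {d:.0f} km")
```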
Abstract:
House dust is a heterogeneous matrix that contains a number of biological materials and particulate matter gathered from several sources. It is the accumulation of a number of semi-volatile and non-volatile contaminants, which are trapped and preserved; house dust can therefore be viewed as an archive of both indoor and outdoor air pollution. There is evidence that, on average, people tend to stay indoors most of the time, and this increases exposure to house dust. The aims of this investigation were to: (1) assess the levels of Polycyclic Aromatic Hydrocarbons (PAHs), elements and pesticides in the indoor environment of the Brisbane area; (2) identify and characterise the possible sources of elemental constituents (inorganic elements), PAHs and pesticides by means of Positive Matrix Factorisation (PMF); and (3) establish the correlations between the levels of indoor air pollutants (PAHs, elements and pesticides) and the external and internal characteristics or attributes of the buildings and indoor activities, by means of multivariate data analysis techniques. The dust samples were collected during the period 2005-2007 from homes located in different suburbs of Brisbane, Ipswich and Toowoomba, in South East Queensland, Australia. A vacuum cleaner fitted with a paper bag was used as a sampler for collecting the house dust, and a survey questionnaire completed by the house residents captured information about the indoor and outdoor characteristics of their residences. House dust samples were analysed for three different pollutant classes: pesticides, elements and PAHs. The analyses were carried out on samples of particle size less than 250 µm. The chemical analyses for both pesticides and PAHs were performed using gas chromatography-mass spectrometry (GC-MS), while elemental analysis was carried out using inductively coupled plasma-mass spectrometry (ICP-MS). The data were subjected to multivariate data analysis techniques, namely the multi-criteria decision-making procedure Preference Ranking Organisation Method for Enrichment Evaluations (PROMETHEE), coupled with Geometrical Analysis for Interactive Aid (GAIA), in order to rank the samples and to examine the data display. This study showed that, compared with the results from previous work carried out in Australia and overseas, the concentrations of pollutants in house dust in Brisbane and the surrounding areas were relatively very high. The results also showed significant correlations between some of the physical parameters (type of building material, floor level, distance from industrial areas and major roads, and smoking) and the concentrations of pollutants. The type of building material and the age of a house were found to be two of the primary factors affecting the concentrations of pesticides and elements in house dust. The concentrations of these two types of pollutant appear to be higher in old (timber) houses than in brick ones; in contrast, the concentrations of PAHs were higher in brick houses than in timber ones. Other factors, such as floor level and distance from the main street and industrial areas, also affected the concentrations of pollutants in the house dust samples. To apportion the sources and to understand the mechanisms of the pollutants, the Positive Matrix Factorisation (PMF) receptor model was applied.
The results showed significant correlations between the concentrations of contaminants in house dust and the physical characteristics of the houses, such as the age and type of the house, the distance from main roads and industrial areas, and smoking. Sources of pollutants were identified. For PAHs, the sources were cooking activities, vehicle emissions, smoking, oil fumes, natural gas combustion and traces of diesel exhaust emissions. For pesticides, the sources were the application of pesticides for controlling termites in buildings and fences, for treating indoor furniture, and in gardens for controlling pests attacking horticultural and ornamental plants. For elements, the sources were soil, cooking, smoking, paints, pesticides, combustion of motor fuels, residual fuel oil, motor vehicle emissions, wear of brake linings and industrial activities.
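PMF proper weights each observation by its measurement uncertainty; as a rough, hedged stand-in for the decomposition it performs, non-negative matrix factorisation (NMF) illustrates the same samples = contributions × profiles structure on a made-up samples × species concentration matrix.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical samples x species matrix of measured concentrations.
rng = np.random.default_rng(1)
X = rng.gamma(2.0, 1.0, size=(30, 8))   # 30 dust samples, 8 chemical species

# Factorise X ~ G @ F: G = source contributions, F = source profiles.
# (PMF additionally scales residuals by per-entry uncertainties; NMF does not.)
model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)              # (30 samples x 3 sources)
F = model.components_                   # (3 sources x 8 species)

# Each row of F is a candidate source fingerprint (e.g. traffic, soil,
# smoking), interpreted by which species load most strongly on it.
print(G.shape, F.shape)
```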
Abstract:
Software as a Service (SaaS) is a promising approach for small and medium enterprise (SME) firms, in particular those focused on growing fast and leveraging new technology, due to the potential benefits arising from its inherent scalability, reduced total cost of ownership and ease of access to global innovations. This paper proposes a dynamic perspective on IS capabilities to understand and explain how SMEs source and leverage SaaS. The model is derived by combining the IS capabilities of Feeny and Willcocks (1998) with the dynamic capabilities of Teece (2007) and contextualizing them for SMEs and SaaS. We conclude that SMEs sourcing and leveraging SaaS require leadership, business systems thinking and informed buying for sensing and seizing SaaS opportunities, and require leadership and vendor development for transforming, in terms of aligning and realigning specific tangible and intangible assets.
Abstract:
Glenwood Homes Pty Ltd v Everhard [2008] QSC 192 involved the not uncommon situation where one costs order is made against several parties represented by a single firm of solicitors. Dutney J considered the implications when only some of the parties liable for the payment of the costs file a notice of objection to the costs statement served in respect of those costs.
Abstract:
Pretreatment is an essential and expensive processing step in the manufacture of ethanol from lignocellulosic raw materials. Ionic liquids (ILs) are a new class of solvents that have the potential to be used as pretreatment agents. The attractive characteristics of ionic liquid pretreatment of lignocellulosics, such as thermal stability, dissolution properties, fractionation potential, cellulose decrystallisation capacity and saccharification impact, are investigated in this thesis. Dissolution of bagasse in 1-butyl-3-methylimidazolium chloride ([C4mim]Cl) at high temperatures (110 °C to 160 °C) is investigated as a pretreatment process. Material balances are reported and used, along with enzymatic saccharification data, to identify optimum pretreatment conditions (150 °C for 90 min). At these conditions, the dissolved and reprecipitated material is enriched in cellulose, has a low crystallinity, and its cellulose component is efficiently hydrolysed (93 %, 3 h, 15 FPU). At pretreatment temperatures < 150 °C, the undissolved material has only slightly lower crystallinity than the starting material. At pretreatment temperatures ≥ 150 °C, the undissolved material has low crystallinity and, when combined with the dissolved material, has a saccharification rate and extent similar to completely dissolved material (100 %, 3 h, 15 FPU). Complete dissolution is therefore not necessary to maximise saccharification efficiency at temperatures ≥ 150 °C. Fermentation of [C4mim]Cl-pretreated, enzyme-saccharified bagasse to ethanol is successfully conducted (85 % molar glucose-to-ethanol conversion efficiency). Compared with standard dilute acid pretreatment, the optimised [C4mim]Cl pretreatment achieves substantially higher ethanol yields (79 % cf. 52 %) in less than half the processing time (pretreatment, saccharification, fermentation). Fractionation of bagasse partially dissolved in [C4mim]Cl into a polysaccharide-rich and a lignin-rich fraction is attempted using aqueous biphasic systems (ABSs) and single-phase systems with preferential precipitation. ABSs of ILs and concentrated aqueous inorganic salt solutions are achievable (e.g. [C4mim]Cl with 200 g L-1 NaOH), albeit exhibiting a number of technical problems, including phase convergence (which increases with increasing biomass loading) and deprotonation of imidazolium ILs (5-8 mol %). Single-phase fractionation systems comprising lignin solvents / cellulose antisolvents, viz. NaOH (2 M) and acetone in water (1:1, volume basis), afford solids with, respectively, 40 % and 29 % less lignin by mass than water-precipitated solids. However, this delignification imparts little increase in the saccharification rates and extents of these solids. An alternative single-phase fractionation system is achieved simply by using water as an antisolvent. Regulating the water : IL ratio results in a solution that precipitates cellulose and maintains lignin in solution (0.5 water : IL mass ratio) in both [C4mim]Cl and 1-ethyl-3-methylimidazolium acetate ([C2mim]OAc). This water-based fractionation is applied in three IL pretreatments of bagasse ([C4mim]Cl, 1-ethyl-3-methylimidazolium chloride ([C2mim]Cl) and [C2mim]OAc). Lignin removal of 10 %, 50 % and 60 % by mass, respectively, is achieved, although only 0.3 %, 1.5 % and 11.7 % is recoverable even after ample water addition (3.5 water : IL mass ratio) and acidification (pH ≤ 1). In addition, the recovered lignin fraction contains 70 % hemicelluloses by mass.
The delignified, cellulose-rich bagasse recovered from these three ILs is exposed to enzymatic saccharification. The saccharification (24 h, 15 FPU) of the cellulose mass in the starting bagasse achieved by these pretreatments ranks as: [C2mim]OAc (83 %) >> [C2mim]Cl (53 %) = [C4mim]Cl (53 %). Mass balance determinations accounted for 97 % of the starting bagasse mass for the [C4mim]Cl pretreatment, 81 % for [C2mim]Cl and 79 % for [C2mim]OAc. For all three IL treatments, the remaining bagasse mass (not accounted for by mass balance determinations) is mainly (more than half) lignin that is not recoverable from the liquid fraction. After pretreatment, 100 % of the mass of both ions of all three ILs was recovered in the liquid fraction. Compositional characteristics of the [C2mim]OAc-treated solids, such as low lignin, low acetyl group content and preservation of arabinosyl groups, are opposite to those of the chloride IL-treated solids. The former characteristics resemble those imparted by aqueous alkali pretreatment, while the latter resemble those of aqueous acid pretreatments. The 100 % mass recovery of cellulose in [C2mim]OAc, as opposed to 53 % in [C2mim]Cl, further demonstrates this, since the cellulose glycosidic bonds are protected under alkali conditions. Decreasing the alkyl chain length of the imidazolium cation of these ILs imparts higher rates of dissolution and losses, and increases the severity of the treatment without changing the chemistry involved.
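The molar glucose-to-ethanol conversion efficiency quoted above follows from the fermentation stoichiometry (one glucose yields at most two ethanol); a minimal sketch with placeholder masses, not the thesis's measurements:

```python
# C6H12O6 -> 2 C2H5OH + 2 CO2, so at most 2 mol ethanol per mol glucose.
M_GLUCOSE, M_ETHANOL = 180.16, 46.07   # molar masses, g/mol

def conversion_efficiency(glucose_g, ethanol_g):
    """Molar glucose-to-ethanol conversion efficiency (0..1)."""
    mol_glc = glucose_g / M_GLUCOSE
    mol_eth = ethanol_g / M_ETHANOL
    return mol_eth / (2 * mol_glc)

# Placeholder masses: 10 g glucose consumed, 4.35 g ethanol produced.
print(f"{conversion_efficiency(10.0, 4.35):.0%}")  # ~85 %
```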
Abstract:
The rank transform is one non-parametric transform which has been applied to the stereo matching problem. The advantages of this transform include its invariance to radiometric distortion and its amenability to hardware implementation. This paper describes the derivation of the rank constraint for matching using the rank transform. Previous work has shown that this constraint is capable of resolving ambiguous matches, thereby improving match reliability, and a new matching algorithm incorporating this constraint was also proposed. This paper extends that previous work by proposing a matching algorithm which uses a multi-dimensional match surface in which the match score is computed for every possible template and match window combination. The principal advantage of this algorithm is that the use of the match surface enforces the left-right consistency and uniqueness constraints, thus improving the algorithm's ability to remove invalid matches. Experimental results for a number of test stereo pairs show that the new algorithm is capable of identifying and removing a large number of incorrect matches, particularly in the case of occlusions.
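A hedged NumPy sketch of the rank transform itself: each pixel is replaced by the count of pixels in its surrounding window whose intensity is less than the centre's. The window size and border handling are illustrative choices, not those of the paper.

```python
import numpy as np

def rank_transform(img, win=5):
    """Rank transform: count of window pixels darker than the centre pixel."""
    h, w = img.shape
    r = win // 2
    out = np.zeros((h, w), dtype=np.uint8)
    padded = np.pad(img, r, mode="edge")
    for y in range(h):
        for x in range(w):
            window = padded[y:y + win, x:x + win]
            # The centre compares false against itself, so it is not counted.
            out[y, x] = np.count_nonzero(window < img[y, x])
    return out
```

Because the transform depends only on the relative ordering of intensities within the window, it is unchanged by any monotonic intensity mapping, which is the source of its invariance to radiometric distortion.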
Abstract:
A fundamental problem faced by stereo vision algorithms is that of determining correspondences between two images which comprise a stereo pair. This paper presents work towards the development of a new matching algorithm, based on the rank transform. This algorithm makes use of both area-based and edge-based information, and is therefore referred to as a hybrid algorithm. In addition, this algorithm uses a number of matching constraints, including the novel rank constraint. Results obtained using a number of test pairs show that the matching algorithm is capable of removing most invalid matches. The accuracy of matching in the vicinity of edges is also improved.
Abstract:
The rank transform is a non-parametric technique which has been recently proposed for the stereo matching problem. The motivation behind its application to the matching problem is its invariance to certain types of image distortion and noise, as well as its amenability to real-time implementation. This paper derives an analytic expression for the process of matching using the rank transform, and then goes on to derive one constraint which must be satisfied for a correct match. This has been dubbed the rank order constraint or simply the rank constraint. Experimental work has shown that this constraint is capable of resolving ambiguous matches, thereby improving matching reliability. This constraint was incorporated into a new algorithm for matching using the rank transform. This modified algorithm resulted in an increased proportion of correct matches, for all test imagery used.
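A minimal, hedged sketch of the matching step such a constraint plugs into: both images are rank-transformed, then for each pixel the disparity minimising the sum of absolute differences (SAD) between rank windows is selected. This illustrates standard rank-based matching; the paper's specific rank constraint is not reproduced. Window size and disparity range are assumptions.

```python
import numpy as np

def sad_disparity(rank_l, rank_r, max_disp=16, win=7):
    """Winner-take-all disparity from SAD over rank-transformed windows."""
    h, w = rank_l.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    left = np.pad(rank_l.astype(np.int32), r, mode="edge")
    right = np.pad(rank_r.astype(np.int32), r, mode="edge")
    for y in range(h):
        for x in range(w):
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                # Compare the left window with the right window shifted by d.
                cost = np.abs(left[y:y + win, x:x + win]
                              - right[y:y + win, x - d:x - d + win]).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

The rank constraint described in the abstract would act as an additional test on candidate matches, rejecting those whose rank statistics cannot correspond to a correct match.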
Abstract:
A fundamental problem faced by stereo matching algorithms is the matching or correspondence problem, and a wide range of algorithms have been proposed to address it. For any matching algorithm, it would be useful to be able to compute a measure of the probability of correctness, or reliability, of a match. This paper focuses in particular on one class of matching algorithms, those based on the rank transform. The interest in these algorithms for stereo matching stems from their invariance to radiometric distortion and their amenability to fast hardware implementation. This work differs from previous work in that it derives, from first principles, an expression for the probability of a correct match, based on an enumeration of all possible symbols for matching. The theoretical results for disparity error prediction obtained using this method were found to agree well with experimental results. However, disadvantages of the technique developed here are that it is not easily applicable to real images and that it is too computationally expensive for practical window sizes. Nevertheless, the exercise provides an interesting and novel analysis of match reliability.
The backfilled GEI: a cross-capture modality gait feature for frontal and side-view gait recognition
Abstract:
In this paper, we propose a novel direction for gait recognition research by introducing a new capture-modality-independent, appearance-based feature which we call the Back-filled Gait Energy Image (BGEI). It can be constructed from frontal depth images as well as from the more commonly used side-view silhouettes, allowing the feature to be applied across these two differing capture systems using the same enrolled database. To evaluate this new feature, a frontally captured depth-based gait dataset was created containing 37 unique subjects, a subset of whom were also captured from the side. The results demonstrate that the BGEI can effectively be used to identify subjects by their gait across these two differing input devices, achieving a rank-1 match rate of 100% in our experiments. We also compare the BGEI against the GEI and GEV in their respective domains, using the CASIA dataset and our depth dataset, and show that it compares favourably against them. The experiments were performed using a sparse-representation-based classifier with a locally discriminating input feature space, which shows significant improvement in performance over other classifiers used in the gait recognition literature, achieving state-of-the-art results with the GEI on the CASIA dataset.
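The BGEI's back-filling step is specific to the paper and not reproduced here, but the Gait Energy Image it extends is simply the per-pixel mean of aligned binary silhouettes over a gait cycle; a hedged sketch, with the cropping and alignment preprocessing assumed done elsewhere:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """GEI: pixel-wise mean of size-normalised, centre-aligned binary silhouettes.

    silhouettes: iterable of (H, W) binary arrays covering one gait cycle,
    assumed already cropped and aligned (that preprocessing is omitted here).
    """
    stack = np.stack([s.astype(np.float32) for s in silhouettes])
    return stack.mean(axis=0)   # bright = static body parts, grey = limb motion

# Usage with dummy silhouettes standing in for a segmented gait sequence:
frames = [np.random.default_rng(i).integers(0, 2, (128, 88)) for i in range(30)]
gei = gait_energy_image(frames)
print(gei.shape, gei.min(), gei.max())
```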