366 results for BARTLETT CORRECTION
Abstract:
The thesis investigates “where were the auditors in asset securitizations”, a criticism of the audit profession before and after the onset of the global financial crisis (GFC). Asset securitizations increase audit complexity and audit risks, which are expected to increase audit fees. Using US bank holding company data from 2003 to 2009, this study examines the association between asset securitization risks and audit fees, and how that association changed during the GFC. The main test is based on an ordinary least squares (OLS) model adapted from the Fields et al. (2004) bank audit fee model. I employ a principal components analysis to address high correlations among asset securitization risks; individual securitization risks are also tested separately. A suite of sensitivity tests indicates that the results are robust. These include model alterations, sample variations, additional controls, and a correction for the securitizer self-selection problem. A partial least squares (PLS) path modelling methodology is introduced as a separate test, which accommodates high intercorrelations, self-selection correction, and sequential-order hypotheses in one simultaneous model. The PLS results are consistent with the main results. The study finds significant and positive associations between securitization risks and audit fees. After the commencement of the GFC in 2007, bank failures brought increased attention to the role of audits of asset securitization risks; I therefore expect auditors to have become more sensitive to bank asset securitization risks after the commencement of the crisis. I find that auditors appear to focus on different aspects of asset securitization risks during the crisis and to charge a GFC premium for banks. Overall, the results support the view that auditors consider asset securitization risks and market changes, and adjust their audit effort and risk considerations accordingly.
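As a rough illustration of the statistical approach described above (simulated data, not the thesis's sample or its actual fee model; all names and numbers here are hypothetical), the following Python sketch extracts a first principal component from three highly correlated risk measures and regresses a simulated log audit fee on it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: three highly correlated securitization risk measures
# for 200 bank-years, driven by one latent risk factor (illustrative only).
latent = rng.normal(size=200)
risks = np.column_stack(
    [latent + rng.normal(scale=0.3, size=200) for _ in range(3)]
)

# Principal components via eigendecomposition of the correlation matrix.
z = (risks - risks.mean(axis=0)) / risks.std(axis=0)
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
pc1 = z @ eigvecs[:, order[0]]  # scores on the dominant risk component

# OLS of a simulated log audit fee on the first component.
log_fees = 0.5 * pc1 + rng.normal(scale=0.5, size=200)
X = np.column_stack([np.ones(200), pc1])
beta, *_ = np.linalg.lstsq(X, log_fees, rcond=None)
# beta[1] estimates the fee-risk association
```

Because the three measures share one latent factor, the first component absorbs most of their common variance, which is the motivation for using component scores rather than the collinear raw measures as regressors.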
Abstract:
The Common Scrambling Algorithm Stream Cipher (CSA-SC) is a shift-register-based stream cipher designed to encrypt digital video broadcasts. CSA-SC produces a pseudo-random binary sequence that is used to mask the contents of the transmission. In this paper, we analyse the initialisation process of the CSA-SC keystream generator and demonstrate weaknesses which lead to state convergence, slid pairs and shifted keystreams. As a result, the cipher may be vulnerable to distinguishing attacks, time-memory-data trade-off attacks or slide attacks.
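The slid-pair phenomenon can be illustrated with a toy linear feedback shift register (a generic 8-bit LFSR with arbitrary taps, not the actual CSA-SC registers or feedback functions): two states that differ by one clocking produce keystreams that are shifts of one another.

```python
# Toy 8-bit Fibonacci LFSR; taps chosen arbitrarily for illustration,
# not the actual CSA-SC feedback functions.
def lfsr_step(state, taps=(7, 5, 4, 3), width=8):
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & ((1 << width) - 1)

def keystream(state, n):
    """Output bit = LSB of the state before each clock."""
    bits = []
    for _ in range(n):
        bits.append(state & 1)
        state = lfsr_step(state)
    return bits

# A 'slid pair': the second state is the first clocked once, so its
# keystream is the first keystream shifted by one position.
s0 = 0b10011010
s1 = lfsr_step(s0)
ks0 = keystream(s0, 32)
ks1 = keystream(s1, 32)
```

If an attacker can cause two initialisations to land on such related states, recovering one keystream effectively reveals the other, which is the basis of slide attacks.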
Abstract:
Exhaust emissions from motor vehicles vary widely and depend on factors such as engine operating conditions, fuel, age, mileage and service history. A method has been devised to rapidly identify high-polluting vehicles as they travel on the road. The method is able to monitor emissions from a large number of vehicles in a short time and avoids the need to conduct expensive and time-consuming tests on chassis dynamometers. A sample of the exhaust plume is captured as each vehicle passes a roadside monitoring station and the pollutant emission factors are calculated from the measured concentrations using carbon dioxide as a tracer. Although similar methods have been used to monitor soot and gaseous mass emissions, to date the approach has not been used to monitor particle number emissions from a large fleet of vehicles. This is particularly important as epidemiological studies have shown that particle number concentration is an important parameter in determining adverse health effects. The method was applied to measurements of particle number emissions from individual buses in the Brisbane City Council diesel fleet operating on the South-East Busway. Results indicate that the particle number emission factors are gamma-distributed, with a high proportion of the emissions being emitted by a small percentage of the buses. Although most of the high-emitters are the oldest buses in the fleet, there are clear exceptions, with some newer buses emitting as much. We attribute this to their recent service history, particularly improper tuning of the engines. We recommend a targeted correction program as a highly effective measure for mitigating urban environmental pollution.
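The skewed, gamma-distributed fleet profile described above can be sketched numerically. The snippet below simulates per-bus emission factors (the shape, scale and fleet size are hypothetical, not fitted values from the study), recovers the gamma parameters by the method of moments, and checks how much of the total comes from the dirtiest decile:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative only: simulate per-bus particle number emission factors
# as gamma-distributed, mimicking the skewed fleet profile reported above.
# The shape/scale values and fleet size (300 buses) are hypothetical.
ef = rng.gamma(shape=0.8, scale=5e14, size=300)

# Method-of-moments gamma fit: shape k = mean^2/var, scale theta = var/mean.
m, v = ef.mean(), ef.var()
k_hat = m * m / v
theta_hat = v / m

# How much of the fleet's total emissions comes from the dirtiest 10%?
share_top10 = np.sort(ef)[-30:].sum() / ef.sum()
```

With a shape parameter below 1 the distribution is strongly right-skewed, so a small fraction of buses accounts for a disproportionate share of emissions, which is what makes a targeted correction program attractive.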
Abstract:
We present CHURNs, a method for providing freshness and authentication assurances to human users. In computer-to-computer protocols, it has long been accepted that assurances of freshness such as random nonces are required to prevent replay attacks. Typically, no such assurance of freshness is presented to a human in a human-and-computer protocol. A Computer–HUman Recognisable Nonce (CHURN) is a computer-aided random sequence that the human has a measure of control over and input into. Our approach overcomes limitations such as ‘humans cannot do random’ and that humans will follow the easiest path. Our findings show that CHURNs are significantly more random than values produced by unaided humans; that humans may be used as a second source of randomness, and we quantify how much randomness can be gained from humans using our approach; and that our CHURN generator makes the user feel more in control, thus removing the need for complete trust in devices and underlying protocols. We give an example of how a CHURN may be used to provide assurances of freshness and authentication for humans in a widely used protocol.
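One simple way to compare "how random" human-chosen sequences are (an illustrative metric; the paper's own randomness measurements may differ) is the empirical Shannon entropy per symbol:

```python
import math
from collections import Counter

def shannon_entropy_bits(seq):
    """Empirical Shannon entropy per symbol, in bits."""
    n = len(seq)
    counts = Counter(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A patterned digit string an unaided human might type, versus a more
# varied one (both strings are made-up examples):
patterned = "1212121212"   # two symbols, equal frequency
varied = "3901476258"      # ten distinct symbols
```

Here `shannon_entropy_bits(patterned)` is exactly 1.0 bit per symbol, while the varied string reaches log2(10), about 3.32 bits per symbol; a computer-aided scheme aims to push human output toward the higher end of that scale.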
Abstract:
Purpose: Small-field x-ray beam dosimetry is difficult due to a lack of lateral electronic equilibrium, source occlusion, high dose gradients and detector volume averaging. Currently there is no single definitive detector recommended for small-field dosimetry. The objective of this work was to evaluate the performance of a new commercial synthetic diamond detector, the PTW 60019 microDiamond, for the dosimetry of small x-ray fields as used in stereotactic radiosurgery (SRS). Methods: Small field sizes were defined by BrainLAB circular cones (4 to 30 mm diameter) on a Novalis Trilogy linear accelerator, using the 6 MV SRS x-ray beam mode for all measurements. Percentage depth doses were measured and compared to those from an IBA SFD diode and a PTW 60012 E diode. Cross profiles were measured and compared to an IBA SFD diode. Field factors, Ω_(Q_clin,Q_msr)^(f_clin,f_msr), were calculated by Monte Carlo methods using BEAMnrc, and correction factors, k_(Q_clin,Q_msr)^(f_clin,f_msr), were derived for the PTW 60019 microDiamond detector. Results: For the small fields of 4 to 30 mm diameter, there were dose differences in the PDDs of up to 1.5% compared to the IBA SFD and PTW 60012 E diode detectors. For the cross-profile measurements the penumbra values varied, depending upon the orientation of the detector. The field factors, Ω_(Q_clin,Q_msr)^(f_clin,f_msr), were calculated for these field diameters at a depth of 1.4 cm in water and were within 2.7% of published values for a similar linear accelerator. The correction factors, k_(Q_clin,Q_msr)^(f_clin,f_msr), were derived for the PTW 60019 microDiamond detector. Conclusions: The new PTW 60019 microDiamond detector is generally suitable for relative dosimetry in small 6 MV SRS beams on a Novalis Trilogy linear accelerator equipped with circular cones.
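In the small-field formalism used above (Alfonso et al.), the detector correction factor is the Monte Carlo field factor divided by the uncorrected detector reading ratio. The arithmetic can be sketched with hypothetical numbers; none of the values below are measurements from this study:

```python
# Hypothetical values for illustration; not measurements from the study.
omega = 0.870    # Monte Carlo field factor for a small cone
m_clin = 0.445   # detector reading in the clinical small field (a.u.)
m_msr = 0.500    # detector reading in the machine-specific reference field

# Detector correction factor in the small-field formalism:
#   k_(Q_clin,Q_msr)^(f_clin,f_msr) = Omega / (M_clin / M_msr)
reading_ratio = m_clin / m_msr   # what the detector alone would report
k = omega / reading_ratio
```

A `k` below 1 indicates the detector over-responds in the small field relative to the reference field, and the correction scales its reading back toward the true field factor.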