
Data Analytics ECS784U/P


i. Students will sometimes upload their coursework and not hit the submit button. Make sure you fully complete the submission process.

ii. A penalty will be applied automatically by the system for late submissions.

a. Lecturers cannot remove the penalty!

b. Penalties can only be challenged via submission of an Extenuating Circumstances (EC) form, which can be found on your Student Support page. All the information you need to know is on that page, including how to submit an EC claim along with the deadline dates and full guidelines.

c. Deadline extensions can only be granted through approval of an EC claim

d. If you submit an EC form, your case will be reviewed by a panel. When the panel reaches a decision, they will inform both you and the module organiser.

e. If you miss both the submission deadline and the late submission deadline, you will automatically receive a score of 0.

iii. Submissions via e-mail are not accepted.

iv. The School requires that we set the deadline during a weekday at 10:00 AM.

v. For more details on submission regulations, please refer to your relevant student handbook.

2. Coursework overview

Coursework 2 involves applying causal structure learning to a data set of your choice. You will have to complete a series of tasks, and then answer a set of questions.

This coursework is based on the lecture material covered between Weeks 6 and 12, and on the lab material covered between Weeks 9 and 12.

The coursework must be completed individually.

Submission should be a single file (Word or PDF) containing your answers to each of the questions.

o Ensure you clearly indicate which answer corresponds to what question.

o Data sets and other relevant files are not needed, but do save them in case we ask to have a look at them.

To complete the coursework, follow the tasks below and answer ALL questions enumerated in Section 3. It is recommended that you read this document in full before you start completing Task 1.

You can start working on your answers as early as you want, but keep in mind that you need to go through the material up to Week 12 to gain the knowledge needed to answer all the questions.

TASK 1: Set up and reading

a) Visit http://bayesian-ai.eecs.qmul.ac.uk/bayesys/

b) Download the Bayesys user manual.

c) Set up the NetBeans project by following the steps in Section 1 of the manual.

d) Read Sections 2, 3, 4 and 5 of the manual.

e) Skip Section 6.

f) Read Section 7 and repeat the example.

i. Skip subsections 7.3 and 7.4.

g) Read Section 8 and repeat the example.

h) Skip Sections 9, 10, 11 and 12.

i) Read Section 13.

i. Skip subsection 13.6.

TASK 2: Determine research area and prepare data set

You are free to choose or collate your own data set. As with Coursework 1, we recommend that you address a problem you are interested in or related to your professional field. If you are motivated by the subject matter, the project will be more fun for you, and you will likely perform better.

Data requirements:

Size of data: The data set must contain at least 8 variables (yes, a penalty applies for using fewer than 8 variables). There is no upper-bound restriction on the number of variables. However, we recommend using fewer than 50 variables for the purposes of the coursework, to make it much easier for you to visualise the causal graph and to save computational runtime. While the vast majority of submissions typically rely on relatively small data sets that take a few seconds to ‘learn’, keep in mind that some algorithms might take hours to complete learning when given more than 100 variables!

i. You do not need to use a special technique for feature selection – it is up to you to decide which variables to keep. We will not be assessing feature selection decisions.

ii. There is no sample-size restriction, and you are free to use a subset of the samples. For example, your data set may contain millions of rows and you may want to use fewer to speed up learning.

Re-use data from CW1: You are allowed to reuse the data set you have prepared for Coursework 1, as long as: a) you consider that data set to be suitable for causal structure learning (refer to Q1 in Section 3), and b) it contains at least 8 variables.
Bayesys repository: You are not allowed to use any of the data sets available in the Bayesys repository for this coursework.

Categorical data: Bayesys assumes the input data are categorical or discrete; e.g., {"low","medium","high"}, {"yellow","blue","green"}, {"<10","10-20","20+"}, etc., rather than a continuous range of numbers. If your data set contains continuous variables, Bayesys will treat each value of a continuous variable as a different category. This will cause problems with model dimensionality, leading to poor accuracy and high runtime (if it is not clear why, refer to the Conditional Probability Tables (CPTs) covered in the lectures).

To address this issue, you should discretise all continuous variables to reduce the number of states to reasonable levels. For example, a variable with continuous values ranging from 1 to 100 (e.g., {"14.34", "78.56", "89.23"}) can be discretised into categories such as {"1to20", "21to40", "41to60", "61to80", "81to100"}. Because Coursework 2 is not concerned with data pre-processing, you are free to follow any approach you wish to discretise continuous variables. You could discretise the variables manually as discussed in the above example, or even use k-means which we covered in previous lectures, or any other data discretisation approach. We will not be assessing data discretisation decisions.
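As a minimal sketch of the manual discretisation described above, the following Python/pandas snippet bins a hypothetical continuous column (the column name "score" and the bin boundaries are illustrative, not part of the coursework):

```python
import pandas as pd

# Hypothetical continuous variable ranging from 1 to 100.
df = pd.DataFrame({"score": [14.34, 78.56, 89.23, 41.0, 3.2]})

# Fixed-width bins; the labels become the categorical states Bayesys will see.
bins = [0, 20, 40, 60, 80, 100]
labels = ["1to20", "21to40", "41to60", "61to80", "81to100"]
df["score"] = pd.cut(df["score"], bins=bins, labels=labels)

print(df["score"].tolist())  # → ['1to20', '61to80', '81to100', '41to60', '1to20']
```

Any other scheme (quantile bins, k-means cluster labels, hand-chosen cut points) is equally acceptable, since data discretisation decisions are not assessed.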

Missing data values: The input data set must not contain missing values/empty cells. If it does, the easiest solution would be to replace ALL empty cells with a new category value called missing (or use a different relevant name). This will force the algorithms to consider missing values as an additional state. Alternatively, you could use any data imputation approach, such as MissForest. We will not be assessing data imputation decisions.

Once you ensure your data set is consistent with what has been stated above, rename your data set to trainingData.csv and place it in folder Input.
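The missing-value replacement and the final export described above can be sketched as follows (the column names and values here are hypothetical):

```python
import pandas as pd

# Toy example with empty cells (None becomes NaN in pandas).
df = pd.DataFrame({
    "smoker":  ["yes", None, "no"],
    "outcome": ["high", "low", None],
})

# Replace every empty cell with an explicit "missing" state, so the
# algorithms treat missingness as one additional category.
df = df.fillna("missing")

# Write the file to be placed in folder Input.
df.to_csv("trainingData.csv", index=False)

print(df["smoker"].tolist())  # → ['yes', 'missing', 'no']
```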

TASK 3: Draw out your knowledge-based graph

1. Use your own knowledge to produce a knowledge-based causal graph based on the variables you decide to keep in your data set. Remember that this graph is based on your knowledge, and it is not necessarily correct or incorrect. You will compare the graphs learnt by the different algorithms with reference to your knowledge graph.

You may find it easier if you start drawing the graph by hand, and then record the directed relationships in the DAGtrue.csv file. In creating your DAGtrue.csv file, we recommend that you edit one of the sample files that come with Bayesys; e.g., create a copy of the DAGtrue_ASIA.csv file available in the directory Sample input files/Structure learning, then rename the file to DAGtrue.csv, and then replace the directed relationships with those present in your knowledge graph.

NOTE: Your knowledge graph should have a maximum node in-degree of 11; i.e., no node in the graph should have more than 11 parents (this is a library/package restriction).
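A quick way to verify the in-degree restriction, assuming your knowledge graph is held as a simple list of (parent, child) edges (the edge names below are illustrative):

```python
from collections import Counter

# Hypothetical knowledge-graph edges as (parent, child) pairs.
edges = [("Earthquake", "Alarm"), ("Burglar", "Alarm"), ("Alarm", "Call")]

# Count how many parents each node has (its in-degree).
in_degree = Counter(child for _parent, child in edges)
max_parents = max(in_degree.values())

assert max_parents <= 11, "some node has more than 11 parents"
print(max_parents)  # → 2 (Alarm has two parents)
```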

2. Once you are happy with the graph you have prepared, ensure the file is called DAGtrue.csv and placed in folder Input.

NOTE: If your OS does not show file extensions (e.g., .csv or .pdf), name your file DAGtrue and not DAGtrue.csv; otherwise, the file might unintentionally end up being called DAGtrue.csv.csv (since the extension is not visible). If this happens, Bayesys will be unable to locate the file.

3. Make a copy of the DAGtrue.csv file, rename the copy to DAGlearned.csv, and place it in folder Output. You can discard this copy once you complete Task 3.

4. Ensure that your DAGtrue.csv and trainingData.csv (from Task 2) files are in folder Input, and the DAGlearned.csv file is in folder Output. Run Bayesys in NetBeans. Under tab Main, select Evaluate graph and then click on the first subprocess as shown below. Then hit the Run button found at the bottom of tab Main.

The above process will generate output information in the terminal window of NetBeans. Save the last three lines, as highlighted in the Fig below; you will need this information later when answering some of the questions in Section 3.

Additionally, the above process should have generated one PDF file in folder Input called DAGtrue.pdf. Save this file as you will need it later.

This only concerns MAC/Linux users: The above process might return an error while creating the PDF file, due to compatibility issues. Even if the system completes the process without errors, the PDF files generated may be corrupted and not open on MAC/Linux. If this happens, you should use the online GraphViz editor to produce your graphs, available here: https://edotor.net/ , which converts text into a visual drawing. As an example, copy the code shown below in the web editor:

digraph {

Earthquake -> Alarm
Burglar -> Alarm
Alarm -> Call

}

If you are drawing a CPDAG containing undirected edges, then consider:

digraph {

Earthquake -> Alarm

Burglar -> Alarm

Alarm -> Call [arrowhead=none];

}

You can then edit the above code to be consistent with your DAGtrue.csv. You could copy-and-paste the variable relationships (e.g., Earthquake → Alarm) directly from DAGtrue.csv into the code editor, taking care to remove commas and quote any variable names containing spaces.
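The conversion from an edge list to DOT text can also be scripted. The sketch below assumes the edges are available as (parent, child) pairs; adapt the parsing to the actual layout of your DAGtrue.csv:

```python
def edges_to_dot(edges):
    """Turn (parent, child) pairs into DOT text for https://edotor.net/."""
    lines = ["digraph {"]
    for parent, child in edges:
        # Quote names so variables containing spaces remain valid DOT.
        lines.append(f'    "{parent}" -> "{child}"')
    lines.append("}")
    return "\n".join(lines)

dot = edges_to_dot([("Earthquake", "Alarm"),
                    ("Burglar", "Alarm"),
                    ("Alarm", "Call")])
print(dot)
```

Pasting the printed text into the web editor should reproduce the example graph above.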

TASK 4: Perform structure learning

1. Run Bayesys. Under tab Main, select Structure learning and algorithm HC (default selection). Select Evaluate graph and then click on the last two (out of four) options so that you also generate the learned DAG and CPDAG in PDF files, in addition to the DAGlearned.csv file which is generated by default. Then, hit the Run button.

2. Once the above process completes, you should see:

i. Relevant text generated in the terminal window of NetBeans.

ii. The files DAGlearned.csv, DAGlearned.pdf and CPDAGlearned.pdf should be generated in folder Output. As stated in Task 3, the PDF files may be corrupted on MAC/Linux, and you will have to use the online GraphViz editor to produce the graph corresponding to DAGlearned.csv (simply copy the relationships from the CSV file into the editor as discussed in Task 3).

3. Repeat the above process for the other four algorithms; i.e., TABU, SaiyanH, MAHC and GES. Save the same output information and files that each algorithm produces (ensure you first read the NOTE below).

NOTE: As stated in the manual, Bayesys overwrites the output files every time it runs. You need to remember to either rename or move the output files to another folder before running the next algorithm.

Similarly, if you happen to have one of the output files open (for example, viewing DAGlearned.pdf in Adobe Reader while running structure learning), Bayesys will fail to replace the PDF file, and the output file will not reflect the latest iteration. Ensure you close all output files before running structure learning.

3. Questions

This coursework involves applying five different structure learning algorithms to your data set. We do not expect you to have a detailed understanding of how the algorithms operate. None of the questions focuses on the algorithms themselves, and hence your answers should not focus on discussing differences between algorithms.

You should answer ALL questions.

You should answer the questions in your own words.

Do not exceed the maximum number of words specified for each question. If a question restricts the answer to, say, 100 words, only the first 100 words will be considered when marking the answer.

Marking is out of 100.

QUESTION 1: Discuss the research area and the data set you have prepared, along with pointers to your data sources. Screen-capture part of the final version of your data set and present it here as a Figure. For example, if your data set contains 15 variables and 1,000 samples, you could present the first 10 columns and a small part of the sample size. Explain why you considered this data set to be suitable for structure learning, and what questions you expect a structure learning algorithm to answer.

Maximum number of words: 150 Marks: 10

QUESTION 2: Present your knowledge-based DAG (i.e., DAGtrue.pdf or the corresponding DAGtrue.csv graph visualised through the web editor), and briefly describe the information you have considered to produce this graph. For example, did you refer to the literature to obtain the necessary knowledge, or did you consider your own knowledge to be sufficient for this problem? If you referred to the literature to obtain additional information, provide references and very briefly describe the knowledge gained from each paper. If you did not refer to the literature, justify why you considered your own knowledge to be sufficient in determining the knowledge-based graph.

NOTE: It is possible to obtain maximum marks without referring to the literature, as long as you clearly justify why you considered your personal knowledge alone to be sufficient. Any references provided will not be counted towards the word limit.

Maximum number of words: 200 Marks: 10

QUESTION 3: Complete Table Q3 below with the results you have obtained by applying each of the algorithms to your data set during Task 4. Compare your CPDAG scores produced by F1, SHD and BSF with the corresponding CPDAG scores shown in Table 3.1 (page 13) in the Bayesys manual.

Specifically, are your scores mostly lower, similar, or higher compared to those shown in Table 3.1 in the manual? Why do you think this is? Is this the result you expected? Explain why.

Table Q3. The scores of the five algorithms when applied to your data set.

                                    HC    TABU    SaiyanH    MAHC    GES
CPDAG scores:   BSF
                SHD
                F1
Log-Likelihood (LL) score
BIC score
# free parameters
Structure learning elapsed time

Maximum number of words: 250 Marks: 15

QUESTION 4: Present the CPDAG generated by HC (i.e., CPDAGlearned.pdf or the corresponding CPDAGlearned.csv graph visualised through the web editor). Highlight the three causal classes in the CPDAG. You only need to highlight one example for each causal class. If a causal class is not present in the CPDAG, explain why this might be the case.

Maximum number of words: 200 Marks: 10



QUESTION 5: Rank the five algorithms by score, as determined by each of the three metrics specified in Table Q5. Are your rankings consistent with the rankings shown under the column “Rankings according to the Bayesys manual” in Table Q5 below? Is this the result you expected? Explain why.

Table Q5. Rankings of the algorithms based on your data set, versus rankings of the algorithms based on the results shown in Table 3.1 in the Bayesys manual.

       Your rankings                                            Rankings according to the Bayesys manual
Rank   BSF             SHD             F1                       BSF               SHD               F1
       [single score]  [single score]  [single score]           [average score]   [average score]   [average score]
1                                                               SaiyanH [0.559]   MAHC [50.96]      SaiyanH [0.628]
2                                                               GES [0.506]       SaiyanH [57.98]   MAHC [0.579]
3                                                               MAHC [0.503]      HC [62.36]        GES [0.552]
4                                                               TABU [0.499]      TABU [62.63]      TABU [0.549]
5                                                               HC [0.498]        GES [63.3]        HC [0.548]

Maximum number of words: 200 Marks: 10

QUESTION 6: Refer to your elapsed structure learning runtimes and compare them to the runtimes shown in Table 3.1 in the Bayesys manual. Indicate whether your results are consistent or not with the results shown in Table 3.1. Explain why.

Maximum number of words: 100 Marks: 10



QUESTION 7: Compare the BIC score, the Log-Likelihood (LL) score, and the number of free parameters generated in Task 3, against the same values produced by the five structure learning algorithms you used in Task 4. What do you understand from the difference between those three scores? Are these the results you expected? Explain why.

Table Q7. The BIC scores, Log-Likelihood (LL) scores, and number of free parameters generated by your knowledge-based graph (Task 3) and by each of the five algorithms (Task 4).

                                       BIC score    Log-Likelihood (LL)    Free parameters
Your knowledge-based graph (Task 3)
HC (Task 4)
TABU (Task 4)
SaiyanH (Task 4)
MAHC (Task 4)
GES (Task 4)

Maximum number of words: 200 Marks: 15

QUESTION 8: Select TWO knowledge approaches from those covered in Week 11 Lecture and Lab; i.e., any two of the following: a) Directed, b) Undirected, c) Forbidden, d) Temporal, e) Initial graph, f) Variables are relevant, and g) Target nodes. Apply each of the two approaches to the structure learning process of HC, separately (i.e. only use one knowledge approach at a time). It is up to you to decide how many constraints to specify for each approach. Then, complete Table Q8 and explain the differences in scores produced before and after incorporating knowledge. Are these the results you expected? Explain why.

Remember to clarify which two knowledge approaches you have selected from those listed between (a) and (g) above, and show in a separate/new table the constraints you have specified for each approach. These constraints must come from the knowledge graph you produced in Task 3. Note that knowledge approach (f) does not require any constraints; but yes, you can still use it as one of your two selections.

Table Q8. The scores of HC applied to your data, with and without knowledge.

                                                          CPDAG scores
Knowledge approach                                        BSF   SHD   F1    LL    BIC   Free parameters   Number of edges   Runtime
Without knowledge
With knowledge - list your 1st knowledge approach here
With knowledge - list your 2nd knowledge approach here

Maximum number of words: 300 Marks: 20
