
IFB104 Building IT Systems
Semester 2, 2019
Assignment 2, Part A: News Feed Aggregator
(21%, due 11:59pm Sunday, October 20th, end of Week 12)
Overview
This is the first part of a two-part assignment. This part is worth 21% of your final grade for
IFB104. Part B will be worth a further 4%. Part B is intended as a last-minute extension to
the assignment, thereby testing the maintainability of your solution to Part A and your ability
to work under time pressure. The instructions for completing Part B will not be released until
Week 12. Whether or not you complete Part B you will submit only one solution, and receive
only one mark, for the whole 25% assignment.
This is a complex and challenging assignment. If you are unable to solve the whole problem,
submit whichever parts you can get working. You will receive partial marks for incomplete
solutions.
Motivation
The way we consume news has changed dramatically in recent years. The days of morning
and afternoon home newspaper deliveries are long gone. Where readers were once restricted
to a handful of local news sources, we now have a bewildering range of online options from
around the world. Most newspapers, radio and television stations now make their news services
available online, in addition to new purely online news services. Making sense of this
cacophony is a challenge.
One response is news aggregation services. These allow readers to create their own news
channels by mixing their preferred news sources together into a single stream. In this assignment
you will create your own news aggregation application in Python. Your program
will have a Graphical User Interface that allows its user to select how many stories they want
to see from each source and then export an HTML document containing the selected stories.
This document can be examined in a standard web browser or printed as a hardcopy.
This “capstone” assignment is designed to incorporate all of the concepts taught in IFB104.
To complete it you will need to: (a) use Tkinter to create an interactive Graphical User Interface;
(b) download web documents using a Python script and use pattern matching to extract
specific elements from them; and (c) generate an HTML document that integrates the extracted
elements, presenting them in an attractive, easy-to-read format.
Goal
Your aim in this assignment is to develop an interactive “app” which allows its users to select
how many news stories they want to see from several different news sources. There must be
at least four different sources, two of them “live” news feeds and two “archives” of previously-downloaded
news items. Most importantly, the two online web documents from which
you get your “live” news must be ones that are updated on a continuous basis (at least daily
but preferably much more often) so your program needs to be resilient to changes in the
source documents. These two news sources must also come from different web sites (i.e.,
different web servers), to allow for one of the sites being temporarily offline.
For the purposes of this assignment you have a free choice of which news sources to use,
provided there are always at least ten stories in each one, the stories are updated frequently,
and the information available for each story includes a headline, the date/time of publication,
a photo, and a short textual description. Appendix A below lists many “RSS Feed” web sites
which should be suitable for this assignment, but you are encouraged to find your own of personal
interest.
Using these news sources you are required to build an IT system with the following general
architecture.
Your application will be a Python program with a Graphical User Interface. Under the user’s
control, it allows news feeds to be previewed in the GUI, from both online and archived news
sources. When the user is happy with their selections they can then export the selected stories
as an HTML document. This document will contain full detail of each story and can be
studied by the user in any standard web browser.
This is a large and complex project, so its design allows it to be completed in distinct stages.
You should aim to build the system incrementally, rather than trying to solve the whole problem
at once. A suggested development sequence is:
1. Develop code that allows the static, archived news stories to be previewed in the GUI.
2. Extend your solution so that it allows “live” news stories to be previewed in the GUI.
3. Extend your solution further so that the user’s selected stories can be exported as an
HTML document.
If you can’t complete the whole assignment submit whatever parts you can get working. You
will get partial marks for incomplete solutions (see the marking guide below).
Illustrative example
To demonstrate the idea, below we describe our own news aggregation application, which
uses information extracted from four different news sites. Our demonstration solution allows
users to select stories from two archived news feeds, The Queensland Times as it appeared on
September 8th 2019 and the Crikey satirical news service from August 29th. It also allows
two “live” news sources to be seen, the Canberra Times and FOX News Entertainment. Both
of these sources are directly accessed by our Python application, so the news displayed is always
the latest available. The application allows users to select how many stories they wish to
see from each source. The program then accesses all the necessary information from the online
and archived web sites and displays a preview of the top stories from each in the GUI.
The user can then export a personalised news feed which includes stories from all selected
sources as an HTML document which can be viewed in any standard web browser.
The screenshot below shows our example solution’s GUI when first started. We’ve called it
The ‘Smooth Blend’ News Mixer and have included a suitably evocative image of someone
reading news from an RSS feed, but you should choose your own name and GUI design.
The GUI has four ‘spin box’ widgets allowing the user to select how many stories they want
to see from each source, a scrollable text area for displaying previews of the selected stories,
and a push button for exporting the selections. You do not need to copy our example and are
encouraged to design your own GUI with equivalent functionality. For instance, pull-down
menus or text entry boxes could be used for making the selections rather than spin boxes.
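As one possible arrangement (a minimal sketch only, not the required design; the application name, feed labels and sizes are placeholders), a Tkinter layout with four spin boxes, a scrollable preview pane and an export button might look like:

```python
import tkinter as tk

def format_preview(headline, source, pub_date):
    # Preview text for one story in the scrollable pane.
    return f"{headline}\n    {source}, {pub_date}\n\n"

def build_gui():
    # Main window, with the application's name in the title bar and as a heading
    window = tk.Tk()
    window.title("My News Mixer")
    tk.Label(window, text="My News Mixer", font=("Helvetica", 20)).pack()
    # One spin box per news feed, allowing 0-10 stories each
    counts = {}
    for feed in ("Archive 1", "Archive 2", "Live 1", "Live 2"):
        row = tk.Frame(window)
        row.pack(anchor="w")
        tk.Label(row, text=feed, width=10).pack(side="left")
        counts[feed] = tk.Spinbox(row, from_=0, to=10, width=4)
        counts[feed].pack(side="left")
    # Scrollable text area for previewing the selected stories
    frame = tk.Frame(window)
    frame.pack()
    preview = tk.Text(frame, height=15, width=60)
    scroll = tk.Scrollbar(frame, command=preview.yview)
    preview.configure(yscrollcommand=scroll.set)
    preview.pack(side="left")
    scroll.pack(side="right", fill="y")
    # Button that triggers the HTML export (handler omitted in this sketch)
    tk.Button(window, text="Export", command=lambda: None).pack()
    return window, counts, preview

# To run the app interactively: window, _, _ = build_gui(); window.mainloop()
```

The preview-formatting helper is kept separate from the widget code so the same text can later be reused when building the exported document.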
Selecting archived stories
When the user chooses a number of stories from the two archived sources, the application
extracts headlines and publication dates for each story from local files, previously downloaded from the web. For instance, in the screenshot below the user has chosen to see two
stories from our archived copy of the Queensland Times and one from the Crikey archive file.
Accordingly, our application extracts the top two stories from the Queensland Times file and
the top story from Crikey. It displays each story’s headline, source and publication date in
the preview pane. (We downloaded a copy of the Crikey site at 8:46am on August 29th but,
as can be seen above, the most recent story on the site at that time was from the previous evening,
with an announcement about Boris Johnson’s latest Brexit move.)
Exporting the selected stories
Happy with their selections, the user then presses the “Export” button. This causes our application
to generate an HTML file called news.html (in the same folder as the Python program).
This document contains copies of the same stories previewed in the GUI, plus additional
detail including a photo and a short story summary.
When opened in a standard web browser this file appears as shown overleaf. It includes a
heading identifying the document and a “splash” image. Following this are the selected stories,
each consisting of a headline, photo, short description, identification of the story’s source,
and the publication date. At the end of the file are four hyperlinks to the original web sites
from which both the “live” and “archived” data is/was sourced.
The document is well presented and the various elements of each story all match one another.
Importantly, all the images in the exported document, including the “splash” image up the top
and the story photos, are online images, not ones stored on our local computer. To ensure
that the exported web document is portable and can be viewed on any computer, the photos
are all links to online images using appropriate HTML “img” tags and URLs.
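A minimal sketch of generating such a document follows; the story data, application name and image URLs are invented placeholders, not real feeds. Note that each `img` tag points at an online URL and sets only a width, so the browser preserves the image's aspect ratio:

```python
# Invented placeholder data; a real solution would fill this from its feeds.
stories = [
    {"headline": "Example headline",
     "summary": "A short description of the story.",
     "source": "Example Times",
     "date": "Tue, 10 Sep 2019 09:00:00 GMT",
     "photo": "https://example.com/photo.jpg"},
]

def export_html(stories, filename="news.html"):
    parts = ["<!DOCTYPE html>", "<html>", "<body>",
             "<!-- Document heading and online splash image -->",
             "<h1>My News Mixer</h1>",
             '<img src="https://example.com/splash.jpg" alt="splash" width="400">']
    for s in stories:
        # Each story keeps its own headline, photo, summary, source and date
        # together, so elements from different stories can't be mixed up.
        parts.append("<h2>%s</h2>" % s["headline"])
        # Width only (no height) so the aspect ratio is preserved.
        parts.append('<img src="%s" alt="story photo" width="300">' % s["photo"])
        parts.append("<p>%s</p>" % s["summary"])
        parts.append("<p><i>%s, %s</i></p>" % (s["source"], s["date"]))
    parts.extend(["</body>", "</html>"])
    # Write the document alongside the Python program.
    with open(filename, "w", encoding="utf-8") as f:
        f.write("\n".join(parts))

export_html(stories)
```

Writing the document as a list of lines joined with newlines keeps the generated HTML readable, which matters for the code-presentation criterion below.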
You are not required to follow the details of our demonstration GUI or exported HTML
document. You are strongly encouraged to use your own skills and initiative to design your
own solution, provided it has all the functionality and features described herein.
Selecting current stories
Tiring of reading “old news”, our user next selects some “live” stories in the GUI as shown
below, as well as changing the choice of archived stories in the mix.
As can be seen in the preview pane, our program updates the archived stories selected. More
importantly, however, scrolling through the headlines reveals current stories downloaded
“live” from the Internet. To do so our program downloaded copies of the source web pages
and extracted appropriate elements from them. When we ran the program on September 10th
the top three stories in the Canberra Times were as shown below.
Our user also selected four stories from FOX News Entertainment. Scrolling down in the
preview pane reveals these stories as well.
Notice in the screenshots above that the publication dates for the stories from different sources
are in different formats. This is how the dates were represented in the source web documents.
There is no need to try to normalise or unify the date/time formats; they should simply
be reproduced exactly as they appeared in the original documents.
Exporting the selected stories (again)
At this point our user again presses the “Export” button, causing all ten selected stories to be
written to the news.html file. When opened in a web browser the chosen news feed mix
appears as shown in the following extracts. Here we have a unique mixture of
both old and current news, and Australian and overseas news, but all of the stories are clearly
labelled with their source and publication date, so there is no confusion. As usual, all the images
in the document are links to online files. (And, no, we don’t understand the second Crikey
story either, but that’s what they had on their web site at the time we downloaded their
news feed!)
Extracting the HTML elements
To produce the news story details for displaying in the GUI and exporting as part of our
HTML document, our application used regular expressions to extract elements from the relevant
source web documents, whether they were stored in the static archive or are downloaded
from the Internet whenever the program runs.
A significant challenge for this assignment is that web servers deliver different HTML/XML
documents to different web browsers or other software clients. This means the web document
you see in a browser may be different from the web page downloaded by your Python
application. For this reason, to create your “archived” files you should download the web
documents using our downloader program (see below), or a similar application. This will
ensure that the “live” and “archived” documents have similar formats, thus making your pattern
matching task easier.
To produce our demonstration solution, we first downloaded copies of the web pages and
then studied their source code to identify text patterns that would help us find the specific
elements we wanted. For instance, the XML source code for the FOX Entertainment news
feed was as follows on September 10th.
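The feed's source code is not reproduced here, but a typical RSS 2.0 item has the following general shape (an illustrative fragment with invented values, not the actual FOX feed):

```xml
<item>
  <title>Example headline</title>
  <description>A short summary of the story.</description>
  <pubDate>Tue, 10 Sep 2019 09:00:00 GMT</pubDate>
  <enclosure url="https://example.com/photo.jpg" type="image/jpeg"/>
</item>
```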
Looking closely at this code we can, for instance, see the various elements of the top story,
concerning the Brady Bunch’s house renovation, including the headline (in <title> tags), the story summary (in <description> tags), the publication date (in <pubDate> tags), and the photo (as a JPG image URL, one of several alternatives). This knowledge was enough to allow us to create regular expressions which extract all the necessary elements for all stories in the document using Python’s findall function.
Sometimes it’s easier to use other Python features as well as, or instead of, regular expressions to help extract the data. For instance, we found that our regular expression for extracting headlines from the FOX Entertainment web site also matched the two “FOX News” titles at the top of the web page in addition to the headlines we wanted. Rather than complicating our regular expression, we therefore simply deleted the first two items returned by findall each time we extracted headlines from this page. (We also found that the URLs for the photos had inconsistent formats in this site, making them difficult to extract, so we don’t recommend using this site in your own solution.) Most importantly, you must extract elements in a general way that will still work when the contents of the source web page are updated.
Obviously working with such complex code is challenging. You should begin with your static, “archived” documents to get some practice at pattern matching before trying the dynamically changeable web documents downloaded from online.
Care was also taken to ensure that no HTML/XML tags or other HTML entities appeared in the extracted text when displayed in either the GUI or the exported HTML document. In some cases it was necessary to delete or replace such mark-ups in the text after it was extracted from the original web document.
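A minimal self-contained sketch of this findall-based extraction and mark-up clean-up follows. The sample text is invented; real feeds are longer and messier, and the standard library's html.unescape could replace the hand-written entity substitutions shown here:

```python
import re

# Illustrative RSS-style text with two stories (invented sample data).
xml = """
<item><title>First headline</title>
<description>Summary &amp; details of story one.</description>
<pubDate>Tue, 10 Sep 2019 09:00:00 GMT</pubDate></item>
<item><title>Second headline</title>
<description>Summary of story two.</description>
<pubDate>Tue, 10 Sep 2019 10:00:00 GMT</pubDate></item>
"""

# findall returns every match, wherever it occurs in the document,
# so the code keeps working when the feed's contents change.
headlines = re.findall(r"<title>(.*?)</title>", xml)
summaries = re.findall(r"<description>(.*?)</description>", xml)
dates = re.findall(r"<pubDate>(.*?)</pubDate>", xml)

def clean(text):
    # Delete any leftover tags, then replace common HTML entities
    # so nothing spurious appears in the GUI or exported document.
    text = re.sub(r"<[^>]+>", "", text)
    return text.replace("&amp;", "&").replace("&lt;", "<").replace("&gt;", ">")

print(headlines)            # ['First headline', 'Second headline']
print(clean(summaries[0]))  # Summary & details of story one.
```

Because the three lists are extracted in document order, item i of each list belongs to the same story, which is what keeps headlines, summaries and dates correctly paired.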
The information seen by the user must not contain any extraneous tags or unusual characters that would interfere with the appearance of the news stories either in the GUI or the exported document.
Exporting the HTML document
Our program creates the exported HTML document by writing code into a text file, integrating the various elements extracted from the news feeds. Two segments of the HTML code generated by our Python program are shown below. Although not intended for human consumption, the HTML code is nonetheless laid out neatly, and with comments indicating the purpose of each part. Your HTML code must also be well presented to facilitate future maintenance of your application.
Robustness
Another important aspect of your solution is that it must be resilient to error. The biggest risk with this kind of program is problems accessing the source web sites. We have attempted to make our download function as robust as possible. In particular, if it detects an error while downloading a web document it returns the special value None instead of a character string, so your program should allow for this. (We don’t claim that the download function is infallible, however, because the results it produces are dependent on the behaviour of your specific Internet connection. For instance, some systems will generate a default web document when an online site can’t be reached, in which case the download function will be unaware that a failure has occurred and won’t return None.)
For instance, in our demonstration solution the GUI alerts the user to a failure to download a web site as follows.
Therefore, as insurance against the risk of a web site failing completely, your program’s two “live” web sources must come from different web servers.
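One way of allowing for the None-on-error behaviour described above can be sketched as follows. The download function here is a stand-in that simulates an unreachable site; the real one is in the provided template, and the extraction and warning logic are placeholders:

```python
def download(url):
    # Stand-in for the provided download function; simulates failure.
    return None

def extract_stories(page):
    # Placeholder for the regex-based extraction of story elements.
    return []

def get_live_stories(url, wanted):
    page = download(url)
    if page is None:
        # No document was retrieved: signal the failure to the caller
        # rather than crashing (the GUI could show a warning dialog).
        return None
    return extract_stories(page)[:wanted]

result = get_live_stories("https://example.com/feed.xml", 3)
if result is None:
    print("Warning: news source unavailable; showing archived feeds only.")
```

Checking for None at the single point where downloads happen means every caller, GUI preview and HTML export alike, inherits the same failure handling.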
One way of achieving this is to ensure that the part of the address at the beginning of each site’s URL is entirely distinct. For example, our sample solution used two totally different sources for the “live” news feeds, the Canberra Times and FOX Entertainment. These two sites have the following URLs and clearly come from different web servers.
(Since they never change, there is no need to use distinct servers for the two “archived” documents. Nonetheless, we did so in our sample solution to make the program more interesting.)
Specific requirements and marking guide
To complete this part of the assignment you are required to produce an application in Python 3 with features equivalent to those above, using the provided news_aggregator.py template file as your starting point. In addition you must provide the two (or more) previously-downloaded web documents that serve as your archive of “old news” and one or more image files needed to support your GUI. (However, all of the images in the exported HTML file must be online images and must not be included in your submission.)
Your complete solution must support at least the following features.
• An intuitive Graphical User Interface (4%). Your application must provide an attractive, easy-to-use GUI which has all the features needed for the user to choose how many news stories they want from each of four news feeds (two “archived” and two “live”), preview the headlines for their selections, and export the complete stories as a web document. You have a free choice of which Tkinter widgets to use to do the job, as long as they are effective and clear for the user. This interface must have the following features:
o An image which acts as a “logo” to identify your application.
The image file should be included in the same folder as your Python application.
o Your GUI must name your application in both the Tkinter window’s title and as a large heading in the displayed interface. Inside the window the name may appear as an integrated part of the logo, or as a separate textual label (as in our demonstration solution).
o One or more widgets that allow the user to select how many stories they want to see from each of four news feeds (two “archived” and two “live”).
o One or more widgets that allow the user to see details of the stories selected (headlines, sources and publication dates).
o One or more widgets that allow the user to choose whether or not to export their story selections as an HTML document.
Note that this criterion concerns the front-end user interface only, not the back-end functionality. Functionality is assessed in the following criteria.
• Previewing archived news stories in the GUI (4%). Your GUI must be capable of displaying the top stories, in the quantities selected by the user, from each of two distinct sources of “archived” news, allowing selection of up to ten stories per source. For each story the GUI must display
o the headline,
o the news source (usually the name of a newspaper, magazine, TV or radio station), and
o the publication date/time for the story.
The necessary elements must be extracted from HTML/XML files previously downloaded and stored along with your Python program. The documents must be stored in exactly the form they were downloaded from the web server; they cannot be edited or modified in any way. Pattern matching must be used to extract the relevant elements from the documents so that the code would still work if the archived documents were replaced with others in the same format. To keep the size of your solution manageable only single HTML/XML source files can be stored.
No image or style files may be stored in your “archive”.
• Previewing “live” news stories in the GUI (4%). Your GUI must be capable of displaying the top stories, in the quantities selected by the user, from each of two distinct sources of “live” news, allowing selection of up to ten stories per source. For each story the GUI must display
o the headline,
o the news source (usually the name of a newspaper, magazine, TV or radio station), and
o the publication date/time for the story.
The necessary elements must be extracted from HTML/XML files directly downloaded from the web while your Python program is running. Pattern matching must be used to extract the relevant elements from the documents so that the code still works even after the online documents are updated. The chosen source web sites must be ones that are updated on a regular basis, at least daily and preferably hourly. The two source web sites must come from different web servers (as insurance against one of the web sites being offline when your assignment is assessed).
• Exporting selected news stories as an HTML document (5%). Your program must be able to generate an HTML document containing full details of the top stories from each of the four news sources, live and/or archived, in the quantities selected by the user. The resulting “mixed news feed” must be written as an HTML document in the same folder as your Python program and must be easy to identify through an appropriate choice of file name, “news.html”. The generated file must contain HTML markups that make its contents easily readable in any standard web browser, and it must be self-contained (i.e., not dependent on any other local files), although it may reference online images and style files.
When viewed in a browser, the displayed document must be neat and well-presented and must contain at least the following features:
o A heading identifying your application.
o A “splash” image characterising your application, downloaded from online when the generated HTML document is viewed (i.e., not from a local file on the host computer).
o Details of each of the news stories selected by the user in the GUI. For each story at least the following information must be displayed:
▪ The headline.
▪ A photograph or image illustrating the story.
▪ A short story summary or description.
▪ The identity of the original news feed (typically a newspaper, magazine, TV or radio station).
▪ The date/time at which the story was published. (There is no need to standardise the format of this timestamp. It can appear in exactly the same format as the source web document. There is also no need to sort the stories from different sources into chronological order, although this would be a helpful feature.)
All of this information must be extracted via pattern matching from HTML documents downloaded from the web. Most importantly, each of these sets of items must all belong together, e.g., you can’t have the headline of one story paired with a photo from another story. Each of the elements must be extracted from the original document(s) separately and used to construct your own HTML document.
o Hyperlinks to the original four web sites from which the information was extracted, both live and archived. (For the live feeds this will help the markers compare the current web pages with your extracted information.)
When viewed in a web browser the exported document must be neatly laid out and appear well-presented regardless of the browser window’s dimensions.
The textual parts extracted from the original documents must not contain any visible HTML tags or entities or any other spurious characters. The images must all be links to images found online, not in local files, must be of a size compatible with the rest of the document, and their original aspect ratio must be preserved (i.e., they should not be stretched in just one direction).
• Good Python and HTML code quality and presentation (4%). Both your Python program code and the generated HTML code must be presented in a professional manner. See the coding guidelines in the IFB104 Code Presentation Guide (on Blackboard under Assessment) for suggestions on how to achieve this for Python. In particular, each significant Python or HTML code segment must be clearly commented to say what it does, e.g., “Extract the link to the photo”, “Display the story’s publication date”, etc.
• Extra feature (4%). Part B of this assignment will require you to make a ‘last-minute extension’ to your solution. The instructions for Part B will not be released until just before the final deadline for Assignment 2.
You can add other features if you wish, as long as you meet these basic requirements. You must complete the task using only basic Python 3 features and the modules already imported into the provided template. You may not use any Python modules that need to be downloaded and installed separately, such as “Beautiful Soup” or “Pillow”. Only modules that are part of a standard Python 3 installation may be used.
However, your solution is not required to follow precisely our example shown above.
Instead you are strongly encouraged to be creative in your choices of web sites, the design of your Graphical User Interface, and the design of your generated HTML document.
Support tools
To get started on this task you need to download various web documents of your choice and work out how to extract the necessary elements for displaying data in the GUI and generating the HTML output file. You also need to allow for the fact that the contents of the web documents from which you get your data will change regularly, so you cannot hardwire the locations of the elements into your program. Instead you must use Python’s string find method and/or regular expression findall function to extract the necessary elements, no matter where they appear in the HTML/XML source code.
To help you develop your solution, we have included two small Python programs with these instructions.
1. downloader is a Python program containing a function called download that downloads and saves the source code of a web document as a text file, as well as returning the document’s contents to the caller as a character string. A copy of this function also appears in the provided program template. You can use it both to save copies of your chosen web documents for storage in your “archive”, as well as to download “live” web documents in your Python application at run time. Although recommended, you are not required to use this function in your solution if you prefer to write your own “downloading” code to do the job.
2. regex_tester is an interactive program introduced in the lectures and workshops which makes it easy to experiment with different regular expressions on small text segments. You can use this together with downloaded text from the web to help perfect your regular expressions.
(There are also many online tools that do the same job, which you can use instead.)
Portability
An important aspect of software development is to ensure that your solution will work correctly on all computing platforms (or at least as many as possible). For this reason you must complete the assignment using standard Python 3 functions and modules only. You may not import any additional modules or files into your program other than those already imported by the given template file. In particular, you may not use any Python modules that need to be downloaded and installed separately, such as “Beautiful Soup” or “Pillow”. Only modules that are part of a standard Python 3 installation may be used.
Security warning and plagiarism notice
This is an individual assessment item. All files submitted will be subjected to software plagiarism analysis using the MoSS system (http://theory.stanford.edu/~aiken/moss/). Serious violations of the university’s policies regarding plagiarism will be forwarded to the Science and Engineering Faculty’s Academic Misconduct Committee for formal prosecution.
As per QUT rules, you are not permitted to copy or share solutions to individual assessment items. In serious plagiarism cases SEF’s Academic Misconduct Committee prosecutes both the copier and the original author equally. It is your responsibility to keep your solution secure. In particular, you must not make your solution visible online via cloud-based code development platforms such as GitHub. Note that free accounts for such platforms are usually public. If you wish to use such a resource, do so only if you are certain you have a private repository that cannot be seen by anyone else. For instance, university students can apply for a free private repository in GitHub to keep their assignments secure (https://education.github.com/pack).
However, we recommend that the best way to avoid being prosecuted for plagiarism is to keep your work well away from the Internet!
Internet ethics: Responsible scraping
The process of automatically extracting data from web documents is sometimes called “scraping”. However, in order to protect their intellectual property, and their computational resources, owners of some web sites may not want their data exploited in this way. They will therefore deny access to their web documents by anything other than recognised web browsers such as Firefox, Internet Explorer, etc. Typically in this situation the web server will return a short “Access Denied” document to your Python script instead of the expected web document (Appendix B).
In this situation it’s possible to trick the web server into delivering the desired document by having your Python script impersonate a standard web browser. To do this you need to change the “user agent” identity enclosed in the request sent to the web server. The provided download function has an option that disguises its true identity. We leave it to your own conscience whether or not you wish to activate this feature, but note that this assignment can be completed successfully without resorting to such subterfuge.
Deliverables
You should develop your solution by completing and submitting the provided Python template file news_aggregator.py. Submit this in a “zip” archive containing all the files needed to support your application as follows:
1. Your news_aggregator.py solution. Make sure you have completed the statement at the beginning of the Python file to confirm that this is your own individual work by inserting your name and student number in the places indicated. Submissions without a completed statement will be assumed not to be your own work.
2.
One or more small image files needed to support your GUI interface, but no other<br>image files.<br>3. The previously-downloaded web documents used as your static “archive” of old news<br>stories. Only HTML/XML source code files may be included. No image or style<br>files associated with the web documents may be included. All images or styles<br>needed to support your exported HTML document must be sourced from online when<br>it is viewed in a web browser.<br>Once you have completed your solution and have zipped up these items submit them to<br>Blackboard as a single file. Submit your solution compressed as a “zip” archive. Do not<br>use other compression formats such as “rar” or “7z”.<br>Apart from working correctly your Python and HTML code must be well-presented and easy<br>to understand, thanks to (sparse) commenting that explains the purpose of significant elements<br>and helpful choices of variable, parameter and function names. Professional presentation<br>of your code wi</span> </div> </div> <div class="width30bi divfr"> <div class="width99bi margintop20 divbdr divfl"> <div class="divtitle"> <div class="divfl divtitlefont" style="text-align: left"> 联系我们</div> <div class="divfr"> </div> </div> <div> <ul> <li class="divullititle heightline25px divtal">QQ:99515681 </li> <li class="divullititle heightline25px divtal">邮箱:99515681@qq.com </li> <li class="divullititle heightline25px divtal">工作时间:8:00-23:00 </li> <li class="divullititle heightline25px divtal">微信:codinghelp</li> </ul> </div> </div> <div class="width99bi margintop20 divbdr divfl"> <div class="divtitle"> <div class="divfl divtitlefont" style="text-align: left"> 热点文章</div> <div class="divfr"> <img src="/image/j01.jpg" width="14" height="14" alt="程序代写更多图片" /></div> <div class="divfr"> <a href="Lists-0-1.html" id="infotop2_amore" title="程序代写周排行更多">更多</a></div> </div> <div> <ul> <li class="divullititle heightline25px divtal"><a href="2019101896084770.html" 