Toward a framework for COTS classification and evaluation - PowerPoint PPT Presentation

Presentation Transcript

  1. Toward a framework for COTS classification and evaluation Letizia Jaccheri Bari 19/12/01

  2. This talk • I introduce myself, IDI, NTNU, Trondheim, Norway • Project goals • Context (education, research) • Brainstorming • COTS classification • COTS evaluation • Related work • Conclusions and further work

  3. Letizia Jaccheri • Pisa 1965 • Master in CS, Pisa 1988 • Programmer 1988–1989 • PhD in Software Engineering, Torino 1994 • Guest researcher in Trondheim from 1990 to 1992 • Assistant professor in Software Engineering, Torino 1994–1997 • Førsteamanuensis (associate professor), IDI/NTNU 1997

  4. Teaching and research • Assistant from 1991/92 at Politecnico di Torino (PT): Programming, Introduction to CS • Designed and ran a new course, Software Engineering 2, at PT • Software quality and process improvement (NTNU) • Software architecture (NTNU) • Research interests: software development environments, software process modelling, OO, SPI, software engineering education, software architecture, empirical studies, human aspects in SE

  5. IDI, NTNU

  6. (figure)

  7. (figure)

  8. Project goals • Commercial Off-The-Shelf (COTS) components • Learning/pedagogical goals • Research goals

  9. Pedagogical context and goals • A 5th-year course for SE master students • Best Norwegian students; 10% of applicants can start • How to evaluate technology • How to create value with the help of new technology • How to learn new technology

  10. Research context and goals • Papers about COTS • Difficulties in defining what a COTS is • Few examples • Little empirical work • Classify and evaluate existing COTS • Develop research hypotheses • A means, not an end

  11. Motivation • The motivation for constructing a taxonomy is to provide both researchers and practitioners with a tool that allows communicating, generalizing conclusions, and applying research findings (Glass). • Which classes are empty? Why? • Being able to evaluate COTS, also by comparison • The final DB is not an end but a means

  12. Classification and evaluation framework • How to make classes: a COTS classification schema • How to evaluate: a set of evaluation attributes

  13. Brainstorming • How did (or would) you solve the problem of COTS classification and evaluation? • List 3 examples of COTS • List some classes of COTS (browsers, DBMS, …) • List 3 evaluation attributes (price, reliability, …)

  14. The pedagogical context • Active reading • Doing • Teaching • Connecting people • Research

  15. The process • Define classes, propose COTS, propose evaluation attributes (students, Assignment 1) • Choose classes, items, evaluation attributes, and scenarios (teacher and other researchers) • Evaluate COTS (students, Assignment 2)

  16. Phase 1 deliverables • Provide at least six attributes which you believe are significant to describe/evaluate COTS • Provide a list of COTS to be evaluated; start from the links

  17. List of software technology • Java; www.java.sun.com • Microsoft; www.microsoft.com • Rational; www.rational.com • Argo UML; argouml.tigris.org • IBM; www.ibm.com • Orion; www.orion.com • WAP Emperor (WAP development); www.wapemperor.com • Paint Shop Pro 7 (drawing); www.jasc.com • Corba; www.corba.org, www.iona.com • Winwap; www.winwap.com • www.yospace.com • PDA; www.palm.com • Perl; www.perl.org • Macromedia; www.macromedia.com

  18. List of software technology (2) • HTML/XHTML/dynamic HTML, XML; www.w3c.org • Opera; www.opera.com • Lynx • Web TV, SMIL; www.w3c.org/AudioVideo/Activity • Amaya • Fetch! • Neoplanet • Mobile Access; www.w3c.org/Mobile/Activity • Together ControlCentre; www.togethersoft.com • Oracle; www.oracle.com • Jasmine; www.cai.com • Sybase; www.sybase.com

  19. Finding classes • Our classification is based on three different axes: • architectural level • kind • phase

  20. 3-tier (figure)

  21. Example (architectural level) • Client • Examples are: HTML, Opera, Netscape, Internet Explorer, Macromedia Flash, Java client compiler, etc. • Server • Examples are: servlet, javac, JSP, Perl, PHP, Microsoft .NET, Microsoft ASP, www.alltheweb.com, etc. • Data

  22. Example (kind) • Executable statements, in either source or binary form • Examples are javac, Macromedia Flash, etc. • Standards available and approved in some forum • An example is the HTML specification as available at http://www.w3.org/MarkUp/ • Services provided by others through some network • Examples are web-based version control systems such as www.sourceforge.org, etc.

  23. Phase • During development • Example: Macromedia Flash is used during development of client-level components • Part of the final running system • Examples are Internet Explorer, mysql, etc.

  24. Evaluation attributes • Acquisition cost: number • Ownership cost: number • Market size: number • Market: number • License type: nominal (site, personal, evaluation)
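The slide pairs each attribute with a measurement scale. A minimal sketch of what an evaluation record for one COTS item could look like, assuming Python; the class name, field names, and the example values are illustrative, not part of the course material:

```python
# Minimal sketch of an evaluation record for one COTS item; the class and
# field names are illustrative, following the slide's attribute/scale pairs.
from dataclasses import dataclass

LICENSE_TYPES = {"site", "personal", "evaluation"}  # nominal scale

@dataclass
class CotsEvaluation:
    acquisition_cost: float   # number
    ownership_cost: float     # number
    market_size: float        # number
    market: float             # number
    license_type: str         # nominal: must be one of LICENSE_TYPES

    def __post_init__(self) -> None:
        # Reject values outside the nominal scale.
        if self.license_type not in LICENSE_TYPES:
            raise ValueError(f"unknown license type: {self.license_type!r}")

# Hypothetical example row:
example = CotsEvaluation(100.0, 20.0, 5e5, 1e5, "personal")
```

Encoding the nominal scale as a checked set keeps the students' evaluations comparable across items.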

  25. Generating classes • 3 × 3 × 2 = 18

  26. Examples • HTML (client, standard, part) • Netscape (client, executable, part) • IE (client, executable, part) • Servlet (server, standard, part) • PhP (server, standard, part) • www.alltheweb.com (client, service, operation) • Ambiguities may arise.
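The cross-product of the three axes and the classified examples can be sketched in a few lines: generate the 3 × 3 × 2 = 18 classes, classify the items, and see which classes remain empty, which is exactly the question raised on the motivation slide. The axis value names abbreviate the slide wording; www.alltheweb.com is left out because its phase value is ambiguous.

```python
# Sketch: enumerate the 3 x 3 x 2 = 18 classes from the three axes and
# classify the example items; the remaining classes are candidates for
# "why is this class empty?" research questions.
from itertools import product

LEVELS = ("client", "server", "data")          # architectural level
KINDS = ("executable", "standard", "service")  # kind
PHASES = ("development", "part")               # part of the final running system

ALL_CLASSES = set(product(LEVELS, KINDS, PHASES))  # 18 classes

ITEMS = {
    "HTML": ("client", "standard", "part"),
    "Netscape": ("client", "executable", "part"),
    "IE": ("client", "executable", "part"),
    "Servlet": ("server", "standard", "part"),
    "PhP": ("server", "standard", "part"),
}

empty = ALL_CLASSES - set(ITEMS.values())
print(f"{len(ALL_CLASSES)} classes, {len(empty)} empty")  # 18 classes, 15 empty
```

With only these five examples, 15 of the 18 classes are empty; each empty class is either a forgotten COTS, a gap in the market, or a class that does not make sense.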

  27. Example: Java (figure)

  28. Assignment 2 • Meeting on Thursday 27th September • Evaluation table • List of software items (derived from those provided by students) • Students will be assigned to items and will evaluate them according to the common evaluation table • Delivery by October 15th • Results presented and discussed in a workshop on the 18th of October; people from industry will participate in this workshop

  29. Results from assignment 1 • Proposed items • 174, 555, min 4, max 36

  30. Frequency of classes of proposed items (chart)
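The frequency chart itself is not in the transcript, but the computation behind it is straightforward; a sketch with hypothetical classifications (not the actual student data):

```python
# Sketch: count how often each (level, kind, phase) class occurs among the
# proposed items; the input tuples here are hypothetical, not the real data.
from collections import Counter

classified = [
    ("client", "executable", "part"),
    ("client", "executable", "part"),
    ("server", "standard", "part"),
    ("client", "standard", "part"),
    ("server", "executable", "development"),
]

freq = Counter(classified)
for cls, n in freq.most_common():
    print(cls, n)
```

`Counter.most_common()` gives the class frequencies in descending order, which is the shape of data a bar chart like slide 30's would be drawn from.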

  31. Architecture • 3-tier • 2-tier • Standalone / centralized • Blackboard • Peer-to-peer • Client-server

  32. Kind • Executable: a compiled, ready-to-run item. • Standard: a documented standard. • Service: an item which can be used from another item or directly by a user. • Source: source code. • Documentation: for example, documentation describing other software items.

  33. Phase • Development • Compilation: items helping compilation. • Execution • Maintenance: change of modules, upgrades, etc. • Start-up: items for starting another software item (for example, boot programs). • Phase-out: items for ending the life cycle of another item.

  34. Possibilities • Resulting in 7 × 5 × 6 = 210 distinct classification possibilities. Too many?

  35. Context • Items • Attributes • Scenarios

  36. Evaluation attributes • Product maturity • Market share • Performance • Security/safety • Reliability • HW requirements • Product support • Documentation • Usability • Learnability • Modifiability • Maintainability • Change frequency • Licence type • Acquisition cost • SW requirements • Standard configuration • Domain specific

  37. What do we expect students to do? • Reasoning about sw products with respect to 3 scenarios • Not so much installation and testing

  38. My proposal • Architecture: 3-tier (client, server, data) or standalone • Kind: executable, standard, service • Phase: development, deployment/execution • We are not interested in the architecture of development tools

  39. (figure)

  40. Scenarios for technology evaluation • www.amazon.com • http://www.idi.ntnu.no/emner/sif80at/ • Your current project if applicable, else a former project.

  41. Amazon • Very many users. • Personalised, dynamic web pages. • Recommendations based on former purchases and items clicked in the session. • Large number of products and product presentations. • On-line shopping (single-click), payment policies. • Email confirmations of purchase and delivery.

  42. SIF80AT • Few users; protected environment for input (password protected). • Provides information. • Normal users provide data/result input interactively through a GUI. • Normal users present and change their own data/results. • Authorised users perform analysis and statistics based on the data/results delivered by normal users.

  43. Own project • If your project is technology-based, evaluate the listed software items to be applied in your project. • Otherwise, select a former technology project to be evaluated.

  44. What and how to measure • Product maturity: write a story • Market share: a percentage (ratio) • Performance: number of users it scales to • Security/safety: write a story

  45. Cont. • Reliability: write a story; try to mention faults • HW requirements: list • Product support: write a summary • SW requirements: list • Standard configuration: ? • Domain specific: if yes, list the domain

  46. Cont. • Documentation: kind (web, on-line, etc.), number of manual pages • Usability: degree of satisfaction on a scale from 1 to 5, and write a story

  47. Cont. • Learnability: how much time does it take to learn to use this component in this scenario? Use historical data or literature. Discuss the relationship to the different scenarios. • Modifiability: how easy is it to modify this item? Not modifiable / parametrizable / provides an interface

  48. Cont. • Maintainability: this does not apply to third-party software • Change frequency: how many releases/versions in the last year? • Licence types: list • Acquisition cost: we will have to distinguish between licence types
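The measurement instructions on slides 44–48 can be summarized as a small attribute-to-scale mapping; the scale labels below are one possible reading of the slides, not an official taxonomy:

```python
# Sketch: attributes from the "What and how to measure" slides mapped to a
# rough measurement scale; the labels are an interpretation, not course material.
MEASUREMENT = {
    "product maturity":  "story",    # free-text narrative
    "market share":      "ratio",    # a percentage
    "performance":       "ratio",    # number of users it scales to
    "security/safety":   "story",
    "reliability":       "story",    # try to mention faults
    "hw requirements":   "list",
    "product support":   "summary",
    "sw requirements":   "list",
    "documentation":     "count",    # kind plus number of manual pages
    "usability":         "ordinal",  # satisfaction 1..5, plus a story
    "learnability":      "time",     # time to learn, per scenario
    "modifiability":     "ordinal",  # not modifiable < parametrizable < interface
    "change frequency":  "ratio",    # releases in the last year
    "licence type":      "list",
    "acquisition cost":  "ratio",    # distinguish between licence types
}

ordinal = [a for a, scale in MEASUREMENT.items() if scale == "ordinal"]
print(ordinal)
```

Making the scale explicit per attribute is what lets the student evaluations be aggregated and compared rather than read one by one.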

  49. Results so far • “Why is this class empty?” A class may be empty because existing COTS have been forgotten, or because no COTS with those characteristics exist on the market. If such COTS do not exist, an obvious reason may be that the class does not make sense. A more interesting reason may be that such COTS have not yet been developed, and this may open up new research. One can for example ask: “Is the class of tools for development of server-level standards not interesting?”

  50. Cont. • Other research questions may be generated by looking at the existing or missing relationships between classes of COTS, such as: “Are there any relationships between the COTS for client-standard development and the COTS for server-standard development?”