Saturday, December 27, 2014

Fill in your Check In Form with quality metadata in just one click (with the new Check In Form Assistant)

One of the neatest features of our new Oracle Desktop-Style integration for Adobe Acrobat is the Form Assistant. This is one of those 'little extras' that take user productivity to an entirely new level. Why? Because it

allows you to fill in your Check In form with quality metadata... in just one click!

Seriously. It lets you supply full metadata values and place documents in the proper WebCenter folder with just one click, because you can save different sets of metadata values for the different types of documents you frequently work with. For instance, when you check in invoices from a vendor XYZ, you fill in a Vendor Name of XYZ, a Vendor Type of X, a Document Type of Invoice and so on. The only field that really differs from one invoice to another is the title. The Form Assistant now allows you to save 'XYZ Invoice' as your personal Saved Metadata Value Set and apply it with one click (See screenshot below)



The Form Assistant also gives you these other quick ways to fill in your Check In Form with quality metadata values:

  • Apply Folder Metadata - uses Content Server Folders metadata. Allows you to apply all non-empty values from a Content Server Folder or drag and drop individual values right into the form!
  • Fill With Recent Entries - Yes! Each and every time an item is checked in, a full set of values you used is automatically saved in your personal log of recent form entries and the entire form can be recalled in one click.
  • Use PDF Metadata From Document - All of the extended Windows and PDF metadata values defined in the document itself are now available for drag and drop right into the fields of your Check In form
Contact Us for more information on our new Oracle Desktop-Style integration for Adobe Acrobat



Adobe Acrobat - Native WebCenter Content Integration now available

Now you can work with PDF documents and metadata in Content Server seamlessly, without leaving Acrobat - and with ultimate simplicity

Search WebCenter right from within Acrobat... Check-Out and open for editing with one click flat... Check-In new revisions transparently on every save or Ctrl-S... Use Smart Form Assistant to fill metadata and never type the same values again.

Eliminate These Records, Compliance and Data Integrity Risks:

Your documents are no longer left on the file system after check-in. Check-In is completed with one click, and revisions are no longer missing - even when users are interrupted and would otherwise forget to complete the check-in. The correct content gets contributed: when users are not forced to browse for a file to check in, they won't accidentally pick the wrong file. Download Whitepaper to learn more

Check Out in One Click Flat

  • Locate, open and save PDF documents without leaving Acrobat or CS - just like the stock MS Word Plug-in
  • Recent Files in Adobe transparently open or check out files from their appropriate location - ECM or local file system
  • Recent Files from the ECM Server are also conveniently segregated in a menu and can be checked out and opened with one click
  • Three powerful ways to locate documents in ECM:
      • Quick Search
      • Advanced Search
      • Browsing Folders
  • Recently used folders are saved for nearly one-click Check Out

Check your changes in - instantly

  • Check in new documents directly from Adobe - by using the Check In Form or by browsing folders. Recently used folders are saved for nearly one-click check-ins
  • Recent values entered in the form on Check In are saved automatically and can be recalled with one click
  • Saved Personal Metadata Sets for quick check in with metadata
  • Create PDF portfolios by dragging and dropping documents directly from ECM search results
  • Drag and Drop Metadata from the PDF document itself or ECM Folders - directly onto the check in form - for quick check-ins with quality metadata... Download Whitepaper to learn more

Save about 30 min of peak productive time per user per day - every day

... by eliminating repetitive manual steps required for every check-in and update.

Save a lot more time when working with PDF Electronic Binders:

"Electronic binders often have over a dozen documents in them and - without Integration - these manual steps need to be followed for each document."
- Stephen Madsen, Program Manager, Alberta Agriculture and Rural Development

Our Adobe ECM Integration Plug-ins do not require an installer. Simply unzip the content of a distribution package into Acrobat's plugins folder and restart Acrobat to use ECM Integration features.

Contact ECM Solutions for more information.

Saturday, December 20, 2014

A better approach to content migrations brings $2 Million in savings and 20 times better use of Senior Resources

We were recently engaged by a client with 47 business areas that are still storing data on shared drives. This brings up a set of challenges with controlling access and revisions, finding content, and other well-known issues.
Migrating content over to Oracle WebCenter will make content much safer and easier to manage, and will let the business areas benefit from retention management, conversion and other features available in Oracle WebCenter, so the decision was made to complete the migration project within 1.5 years.
Each area was taking about 4 months of a Sr. Information Management Professional and one IM Support Person. At that rate, the entire migration was going to take 16 years unless new people were added. Scaling up was also problematic due to the complex manual process required for mapping old file locations to the new file-plan-based locations and other verification and migration activities. It also required highly trained staff, due to the high probability of error with potentially severe implications.

Solutions

ECM Solutions developed an innovative mapping and migration tool that removed the complexity and reduced the possibility of error, thus reducing the demands on the operator - so it is now much easier to bring in additional people to scale up the migration.
The time required to migrate was reduced to about 1 month of an IM Support Person, with just 0.2 months of a Sr. IM Professional now required for supervision. That's a 4x improvement for Jr. IM staff and a 20x improvement for Sr. staff.
It is now entirely possible to migrate all of ARD's content into WebCenter in the allotted 1.5 years and realize the improved security, auditing, revisioning, search, metadata, conversion and other benefits of Oracle WebCenter.

Outcomes

Here are some specific numbers to illustrate the outcomes of implementing the new tool:
  • Business areas left to migrate: 47
  • Estimated cost using Excel and manual mapping: 4 months of 2 people per area ... about 63,200 man-hours
  • Estimated cost at $50/hr for Sr. IM and $30/hr for Jr. IM: $2.5 Million
  • Estimated cost of migration using the tool: 1 month of Jr. IM and 0.2 months of Sr. IM per area
  • Estimated cost at $50/hr for Sr. IM and $30/hr for Jr. IM ... $316,000 ... savings of $2.2 Million
  • Net savings (including the cost of development): about $2.1 Million (see the worked arithmetic below)
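For those who want to check the math, here's the arithmetic behind these estimates, written out in LaTeX. The one assumption added here is roughly 168 working hours per month (so 0.2 months is about 33.6 hours); the figures themselves are from the list above:

    \begin{align*}
    \text{Manual hours} &= 47 \times (4 + 4)\,\text{months} \times 168\,\text{hrs/month} \approx 63{,}200\ \text{man-hours}\\
    \text{Manual cost} &= 47 \times 672\,\text{hrs} \times (\$50 + \$30) \approx \$2.5\,\text{M}\\
    \text{Tool cost} &= 47 \times 168\,\text{hrs} \times (\$30 + 0.2 \times \$50) \approx \$316{,}000\\
    \text{Savings} &\approx \$2.5\,\text{M} - \$0.32\,\text{M} \approx \$2.2\,\text{M}
    \end{align*}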
These numbers could be even better if we factored in the benefits of the improved quality of mapping and the savings brought by using Oracle WebCenter Content - savings that will now be realized much sooner. More savings are still to be realized from significant quality improvements and time savings on the IT side, comparing the use of the tool with the previous Batch Loader, IDC Command and database-scripting-based migrations.
With the Excel-based manual process, it would take up to 8 additional people to migrate 47 business areas in 1.5 years. These people would need to be hired or pulled off their current projects, and all 8 would need to be trained before they could begin.
With the help of the new Migration Tool, the 2 people already dedicated to the project will be able to complete the migration on time.
Contact us to see how a custom migration tool can translate into quick, significant savings for your organization.

Friday, December 19, 2014

Database Professional's Introduction to Records Management

ToadWorld has just published my quick introduction to Records Management, geared specifically toward database professionals. If you or a member of your team is jumping onto a Retention or Records project, this will bring you up to speed and have you talking the same language as your business folks in no time. The article is available here

Tuesday, November 25, 2014

Data Capture Market Of 2015 - Navigating Competitive Landscape - Part 3

In this article we will complete our review of the competitive Data Capture market of 2015 and our in-depth exploration of features and considerations for selecting your ideal, most effective solution.

OTHER SELECTION CONSIDERATIONS

This section will bring your attention to two additional considerations that come into play when switching or enhancing your Document Capture solution.

REASONS FOR SWITCHING PARTNERS

When you have an existing solution in place that has proven inadequate, be sure to 'do your homework' and understand whether it's a lack-of-features issue, a delivery/vendor/people type of issue, or something else entirely.
That will help you narrow down a list of potential vendors and quickly eliminate those who may have the same type of limitations. You may also opt for a different partner or integrator and stay with the same vendor, which may be a lot more cost-effective than ripping apart the solution and starting from scratch. Many systems will look good in a demo and get implemented, but you have to be confident that the vendor will be there for you and be able to help when issues come up.

A WORD OF CAUTION

And now, I've saved the best for last. To paraphrase The Godfather,
"One BA with a pen will outsmart 20 developers with the latest computers"
A capture solution can quickly get expensive, so it is vital that you only consider and implement the requirements that will really make a difference for your business users. Use the 80/20 rule, where 80 percent of the benefits will come from just 20 percent of the features. Yes, it's also critical to pick a solution that will continue to support you as your organization grows and adds features and volume, but it's best if you don't have to invest in all 100 percent of the features and modules on day one - just as is the case with Oracle WebCenter, where you can start with a relatively small footprint and expand the solution on an as-needed basis.

ORACLE DATA CAPTURE TOOLS

And now that we have a system for evaluating vendors, it's time to take a look at Oracle's product offering - Oracle WebCenter Imaging, Oracle WebCenter Capture and Oracle Forms Recognition.

ORACLE CAPTURE

Oracle Capture excels at Image Enhancements, flexible import tools, manually assisted indexing and generation of searchable PDF/A. Documents can then end up in Oracle WebCenter Content or WebCenter Imaging - for additional image manipulations and image-centric workflows.

IMAGE ENHANCEMENTS

Oracle Capture offers a rich set of Image Enhancements (See screenshot below) that can be applied during scanning or to imported documents:



MANUALLY ASSISTED INDEXING

Oracle Capture allows you to automatically extract bar code information, convert, and check in documents without any user intervention; when additional processing is required, it makes it easy to map specific areas of each type of document to the metadata fields where you'd like the information to end up (See screenshot below)


Oracle Capture then offers a rapid customization facility where you can implement additional validation and database lookups and automate all aspects of automatic or manually assisted indexing.

REAL LIFE STORIES

Below are two real-life examples that illustrate the types of scenarios where Oracle Capture is at its best:

DOUBLED PRODUCTIVITY OF DOCUMENT CONTROL PEOPLE

One of the projects we recently completed was automating workflow and document control for a nuclear power station operation management provider. As part of doing business, they exchange a ton of technical specifications and formal transmittals with their government clients and sub-suppliers. Prior to implementation, information was duplicated, not easy to organize and retrieve, and they didn't have a structured workflow process.
Before using Capture, the average time spent per document was benchmarked at 2 min 17 sec - on a 15-field check-in form. Using Capture to pre-fill values, with users only verifying and correcting them, time spent per document dropped to just over 33 seconds!
That’s over two hours of added productive time (out of estimated 4) per Document Control person per day – a 200% increase in productivity!

75% PRODUCTIVITY INCREASE OF CLAIM PROCESSING CLERKS

Another recent project I'd like to mention is one where we automated document storage, acquisition and workflow for a mid-sized insurance company. They have a significant volume of incoming claim-supporting correspondence - mostly scanned documents coming in by email.
Over the years, they’ve tried more than once to extract meaningful metadata fields such as claim number and policy id – on check in – with various degrees of success.
They’ve also relied on a custom, outdated component that automatically checked in incoming email attachments. The component was buggy and unreliable.


Implementing Oracle Capture allowed them to finally retire that component and begin extracting important metadata values from incoming documents during the check-in process. Time spent per document dropped from an average of 1.5 minutes to just under 22 seconds!
And the unsupported and buggy custom component was replaced by Oracle Capture Import Server.

ORACLE FORMS RECOGNITION

Oracle Forms Recognition intelligently locates and extracts unstructured data from a broad range of documents, including purchase orders, remittance, freight bills, and healthcare claims, reducing costly, labor-intensive manual processing tasks and errors. It will allow you to easily grow document processing volumes across applications with a scalable forms recognition system.
OFR integrates with Capture and provides intelligent classification, data extraction and matching. OFR's greatest strengths are its rule-based forms recognition and its integration with other Oracle products like the EBS.

INTELLIGENT FORMS RECOGNITION

OFR can be configured to recognize a large number of document types and extract multiple information fields. For instance, when processing invoices, it can extract the header information as well as the line items from the invoice itself - and check whether the total of all line items adds up to the total specified on the invoice. If a business rule exception occurs, users can easily correct the situation by firing up the Verifier app. The screenshot below shows how an approval stamp on the invoice has obstructed the last line item, which triggered an exception (See screenshot below)


Users can then add a line to the list of extracted lines on the invoice, which clears the exception.
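To make the reconciliation rule concrete, here is a minimal sketch of the line-total check in plain Java. This is only an illustration of the business rule itself, not OFR's actual object model or API - the class and field names are hypothetical:

    import java.math.BigDecimal;
    import java.util.List;

    // Hypothetical invoice line model - OFR's real object model differs.
    class InvoiceLine {
        final BigDecimal amount;
        InvoiceLine(String amount) { this.amount = new BigDecimal(amount); }
    }

    public class LineTotalCheck {
        // True when the extracted line items add up to the stated invoice total.
        static boolean lineItemsReconcile(List<InvoiceLine> lines, BigDecimal invoiceTotal) {
            BigDecimal sum = BigDecimal.ZERO;
            for (InvoiceLine line : lines) {
                sum = sum.add(line.amount);
            }
            return sum.compareTo(invoiceTotal) == 0;
        }

        public static void main(String[] args) {
            // An obstructed line item (say, hidden under an approval stamp) makes
            // the sum fall short of the stated total, triggering an exception.
            List<InvoiceLine> lines = List.of(new InvoiceLine("100.00"), new InvoiceLine("49.50"));
            if (!lineItemsReconcile(lines, new BigDecimal("199.50"))) {
                System.out.println("Business rule exception: route to the Verifier app");
            }
        }
    }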

ORACLE EBS INTEGRATION

Once the information is extracted, it can be sent to Oracle E-Business Suite while the document itself is stored in WebCenter Imaging. (See screenshot below)



The document is then readily available from within the EBS with Attachment functionality - and can be viewed and annotated without leaving the EBS.

SEAMLESS END TO END PROCESS AUTOMATION

The key strength of the Oracle capture tools is not only their industry-leading features, like Image Enhancements in Capture and Automatic Classification and Intelligent Extraction in OFR, but also their seamless integration with other Oracle products, like the EBS and PeopleSoft.

From capture, to extraction and indexing, to processing and managing, to quick and easy access - these products are designed to work together and are fully backed by Oracle Support.

CONCLUSION

While marketing data and high-level observations are readily available from various sources across the Internet, businesses often fall victim to hype and complexity and end up making costly mistakes when it comes to selecting and integrating the components of their Document Capture system.
In addition to (yet another) brief market review, this whitepaper provides an in-depth look at the key 'moving parts' hidden behind every document capture solution and a complete, easy-to-follow checklist that helps organizations quickly turn vague, intangible requirements into specific, actionable steps - steps that make the selection of vendors, tools and components easy and fail-proof.

Tuesday, November 18, 2014

Data Capture Market Of 2015 - Navigating Competitive Landscape - Part 2

This is the second part of the article designed to help you see the real strengths and weaknesses of the complex Data Capture solutions of 2015. We will start looking at specific features of Data Capture tools.

SCANNING

Your Document Capture solution may or may not include scanning. If it does, the first thing to consider is volume: 50 scanned documents per week vs. 100,000 scanned documents a week make a world of difference.

VOLUME CONSIDERATIONS

When looking at your intake volumes, consider two additional factors - the periods of peak or irregular workload and your anticipated growth rate for incoming documents:
  • Is your organization affected by seasonal, external or periodic surges of incoming documents that need to be ingested quickly?
  • Are your estimated intake volumes likely to grow in the future, and at what rate? Are you planning to roll out your Document Capture solution to other departments?
When volumes are low (and stay low), almost any device that delivers sufficient image quality may do, and you may be able to capitalize on devices you already have, such as your MFPs and desktop scanners. (Be sure to look at my cautions later in the paper when picking your scanning devices.)
Now here are a few more important considerations that will help you better understand your scanning requirements:

TYPES OF DOCUMENTS

How many types or classes of documents are you planning to handle? A few well-defined types of documents will be easier to classify and process, and will make it easier to extract data out of the documents. A 'mixed bag' of different types of documents in one scanning batch takes you into a whole different ball game.
How complex are the documents, and how are they received? In some cases, you only need a couple of values from a nice high-resolution document. Sometimes, you need more complex algorithms to get data from complicated, variable or low-quality documents like faxes. Is your information structured or unstructured?

TYPES OF PAPER

Different types of paper may require different types of scanners. Bank checks need to be fed into the scanner in a consistent manner, staples and paper clips must almost always be removed, and heavier or thinner paper may require different processing. How does the scanner react when there are holes in the document, paper clips are left over, or a document is wrinkled, folded or torn?

OFF-SITE CONSIDERATIONS

Can you allow the documents to be taken off-site? In some scenarios, such as highly unstructured content with a lot of information to extract, handling handwriting or extracting information out of drawings and charts, a lot of manual labour may be required and outsourcing may be an attractive option to consider.

IMAGE ENHANCEMENTS

When you use a smartphone or another device that is not specifically designed to work as a scanner, the quality of the image you can produce becomes critically important - and you'll need to consider the use of image enhancements.
Image enhancements such as de-skew, de-speckle, shadow removal and others come as a separate feature (or product) within the scanning solution. For instance, Kofax offers its famous Virtual ReScan tool, and Oracle Capture includes Image Enhancements as part of the main product.
While using image enhancements will be desirable most of the time, you need to be very careful when using them with legal documents, as a missing comma (removed by de-speckle) could potentially bring on serious consequences.

PROTECTION AGAINST JAMS AND DOUBLE FEEDS

When scan volumes are high, an additional level of intelligence in the scanner may become a necessity. Does the scanner provide effective paper separation to reduce the chance of double-feeds in the case of poor document preparation? Does the scanner have technology to proactively detect possible paper jams and avoid damage to original documents?
You may also need to look at detection of paper clips and staples. Paper clips can cause jams and multi-feeds and can damage electronic components in scanners - does the scanner have a method to detect accidental paper clips? Staples can scratch the glass, causing streaking on images - does the scanner have technology to immediately stop scanning to avoid damage?

OTHER FACTORS

Other factors may also come into play with scanning. For instance, high humidity causes the sheets of paper to stick together and the same stack on the same scanner might experience more double-feeds in one location versus some other location.
If you get rust left over on paper after removing staples or paper clips, it may very likely scratch the glass on some scanners and cause ongoing problems, so you may opt for higher-end scanners, such as Microform scanners from Germany.

OTHER INTAKE MECHANISMS

Many times the majority of documents come in as email attachments or by fax, or they are imported from a shared drive or a content management system. If this is your case, do not print and re-scan them just to take advantage of the scanning features in your software; many products are available for direct import of documents, offering significant savings on printing and scanning costs.

DATA EXTRACTION

Extraction is the next step after the images are captured via the Scanning or Import feature. The critical question here is: "What information do you wish to extract?"
For example, does the entire page need to be extracted, or just some particular information like the invoice number? This is very important, because there is a big difference between extracting all data from an image and extracting just the relevant data.
Assume you were capturing a legal contract. Most likely you don't need all of its legal terms in metadata fields, and the only fields you'd like to extract are the date and the company name. The complete images of all pages of the contract will most likely be stored in your Content Management system, and you may also generate searchable PDFs and use your full-text search engine for additional search convenience - these additional ways to access your scanned information may relax your extraction and classification requirements.
The following section will highlight additional aspects that may be relevant when examining the extraction tools:

TEXT OR GRAPHICAL INFORMATION

Another important consideration is the fact that most extraction tools out there are best at dealing with text-based information. If a significant portion of your content is graphical - maps, drawings, graphs - that needs to be automatically classified and have data extracted, text-based OCR systems tend not to work very well, so this becomes an important question to ask your sales rep during the selection process.

VOLUMES

The same considerations regarding your target intake volumes, volume patterns and expected growth that we looked at in the Scanning section above will also apply to your data extraction. They will drive your selection of manual, manually assisted or fully automatic extraction tools. In addition, they will also influence your hardware and infrastructure decisions.

CLASSIFICATION

Each type of document may have different extraction requirements. You may need to extract different fields, apply different validation rules and store documents differently.
Do you have an easy way to separate these types of documents before they enter the scanning or import phases, or will you be relying on your software to classify the documents for you and handle them differently?
Also, will you be able to embed bar codes in your documents to make it easy for your software to classify them - or is this option not available?

CUSTOM BUSINESS LOGIC

Custom logic will let you apply field-specific validation rules, perform database lookups to validate against a list of pre-defined values, treat values as numeric-only or alpha-only, and apply additional business logic that will improve your accuracy and reduce (or eliminate) the manual component of data extraction. Many tools, like Oracle Capture and Oracle Forms Recognition, offer elaborate APIs and allow you to perform all of these functions, but other tools may not offer this facility or may not be flexible enough for what you're trying to accomplish.
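To give you a feel for what such logic looks like, here is a short generic sketch in Java using plain JDBC - not Oracle Capture's own scripting API. The connection string, table and column names are hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class FieldValidator {

        // Numeric-only rule: reject anything that isn't all digits.
        static boolean isNumericOnly(String value) {
            return value != null && value.matches("\\d+");
        }

        // Database lookup: accept the value only if it matches a pre-defined
        // vendor id. The table and column names are made up for this sketch.
        static boolean isKnownVendor(Connection conn, String vendorId) throws SQLException {
            String sql = "SELECT 1 FROM vendors WHERE vendor_id = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, vendorId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next();
                }
            }
        }

        public static void main(String[] args) throws SQLException {
            String extractedVendorId = "10042"; // value produced by extraction
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "capture", "secret")) {
                boolean valid = isNumericOnly(extractedVendorId)
                        && isKnownVendor(conn, extractedVendorId);
                System.out.println(valid ? "auto-index" : "route to manual indexing");
            }
        }
    }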

IMAGE MANAGEMENT AND CONTENT REPOSITORY

This is the third major aspect of any Document Capture solution, and it answers the question: "Once the information is captured and classified and the metadata values are extracted, what happens to it then?" It covers all the other aspects of version control, indexing, management and distribution, cleansing, publishing, search and controlled retention.
The process of selecting a Document Management System is outside the scope of this whitepaper, but I'll bring your attention to a few capture-specific considerations that often get overlooked in the process:

IMAGE ANNOTATIONS

Watermarking, annotation and other image augmentation are not provided by many general-purpose CMSs, but they are provided by Oracle WebCenter Imaging. You'll be able to apply watermarks and annotations and also link them to specific workflow steps. With WebCenter Imaging your original image will still be available, which may be important for legal and compliance reasons - and that may not be the case with other image manipulation tools, which will in fact apply watermarks to the actual image.

RECORD AND RETENTION MANAGEMENT

Retention will let you flag outdated content for deletion or archival, synchronize physical and electronic versions of content, and keep track of the physical files, folders and boxes floating around your offices. Retention Management will also help you more effectively share content across the organization and ensure it is retained for the period that it should be - regardless of the source or the system that stores it.
You may be required to ensure regulatory compliance (SOX, SEC, industry regulations) by keeping content for required periods of time and consistently following the processing rules, such as review or archival and destruction, when that period is over.
Systematically disposing of certain types of documents, when you’re no longer required to keep them – may prove extremely beneficial in case of potential litigation. You may also choose to freeze some important content, related to legal action you’re about to take – so it doesn’t get “accidentally” destroyed by a disgruntled employee.

VOLUME AND LOAD PATTERNS

Volume and load patterns still play a role in your content management, just as they do in your capture process.

OTHER REPOSITORY CONSIDERATIONS

Other repository considerations like security, conversion and workflow are all important to weigh when selecting a new content management system.

LICENSING CONSIDERATIONS

And here's the big one. While every situation and every deal is different, the following considerations will apply in most cases.

PER-DOCUMENT CHARGES

Some vendors (Kofax being a prime example) will charge you different fees based on the volume of documents you will be scanning and processing. If you're dealing with large volumes or are expecting your intake volume to increase, such solutions may quickly become prohibitively expensive, and you may want to opt for a solution that offers server-based licensing without the per-user or per-click components, such as Oracle WebCenter Suite.

RESELLER PRICING

In many cases Value Added Resellers enjoy special savings, and buying from them may prove more cost-effective than dealing directly with the vendor.

That's it for today. In the next part of this article we will complete our review of the features and considerations of the Data Capture tools of 2015.

Friday, November 7, 2014

Data Capture Market Of 2015 - Navigating Competitive Landscape - Part 1

These articles will make it easy for you to navigate the competitive world of vendors, products and prices in the Data Capture market of 2015.

DATA CAPTURE MARKET OF 2015

The Data Capture market continues to expand. According to MRI, the Automatic Data Capture segment will grow by 12.29% in the period from 2012 to 2016, with one of the key factors contributing to this market growth being the increasing need for accuracy in data computation.
Kofax, EMC Captiva, IBM Datacap and KnowledgeLake continue to lead the market in the image processing segment, and Microsoft, Hyland Software, Open Text and Lexmark's Perceptive Software are named as overall ECM market leaders by Gartner as of September 2014.
Below is a copy of Gartner's Magic Quadrant for Enterprise Content Management:


THE DILEMMA

Now, if you're considering a new or replacement data capture solution for yourself or your client, you'll be facing a dilemma: go with one of these market leaders, dig deeper and look for the 'perfect match', or simply try to get something without paying a whole lot. The first option may save you time in the decision-making stage and seem like a good choice - as the story goes, "nobody was ever fired for hiring IBM".
Some clients always try to go with the cheapest solution, and many end up in the proverbial "penny wise..." trap, where the money saved on licensing and support costs is quickly drained by the cost of custom development. Yes, open-source data capture tools like Ephesoft do exist and may serve as a sound alternative, but you need to fully understand the strengths and weaknesses of these tools and make an informed decision.
This whitepaper will quickly take you to the next level of understanding and help you reap the benefits of a 'perfect match' type of tailored solution. This may prove especially helpful considering that, as Gerald Baker of the Encyclopedia of Document Management said,
"We find that suppliers tend to shy away from simple descriptions of what their software can and cannot do - words like "fastest","rich in features", "world's leading" and "multi-function" add little to clarify exactly what the product offers."

SELECTING DOCUMENT CAPTURE SOLUTION

Document Capture has many aspects to it, and each of them has its own leaders, effective and not-so-effective approaches and client-specific considerations.
Let's start by looking at the overall Content Lifecycle: (See diagram below)


Picture Credit: "From Unstructured to Strategic: Enterprise Content Management with Oracle WebCenter"

The first three stages represent the key activities of Document Capture - and they represent the key areas of concern:
  • Scanning,
  • Extraction and
  • Image Management.
Taking a deeper look at each of these key components will help you better understand your specific needs and see what solution (or a combination) may offer a better fit.

In the following articles I'll walk you through the specific areas of concern that will make it easy for you to see the strengths and weaknesses of each particular data capture solution.

Wednesday, September 17, 2014

Eight times faster search experience ... same hardware, software, network


We've recently worked on improving the search experience for one of our clients, who use an Enterprise Content Management system to store their documents. Yet again we were able to achieve some impressive gains in user productivity without having the client invest in new hardware, network or vendor products.


The 'pain points'

The existing CMS presented information in a linear fashion. To find information, users were following these steps:
  1. Populating their search criteria and running a search, 
  2. Browsing through pages of results, 
  3. Looking at Content Information and/or Document Preview on selected items 
  4. Downloading or updating selected documents

We found that users were forced to repeat steps 1 to 3 multiple times, because the search form was replaced by the search results, and the search results pages were replaced by the content information pages (see diagram below).



This sequential way of organizing the process was forcing users to go back and forth in their browsers, having to wait for pages to reload many times in between. Because of this, most searches were taking between 2 and 5 minutes - from the time a user initiated a search to the moment the document was finally located.
The server was also taking time to respond (2-30 seconds for most pages), and because of the constant need to go back and forth, these response times were the first item on the users' complaint list.

Users were unhappy with the server response times, but the real reason for their poor experience was bad information flow.

Another thing we found was that the use of screen real estate could be greatly improved:




The out-of-the-box Search Results screen was only able to fit 20 rows... and nothing else. Green on the screenshot above shows the useful space in the out-of-the-box Search Results screen.

Another issue that further slowed users down and made them lose focus and wait was the use of pagination in search results. Pagination may work well on a web site, but in an enterprise app it can be very counterproductive. Here's why:
  • Going to the next page involves waiting.
  • Mass update operations that span multiple pages have to be performed multiple times (separately for each page).

Not only did these repeated mass updates create more waiting, they also introduced a greater risk of error, as users had to repeat the update instructions for every page.





The new UI 

So here's the new UI that we built (I'm masking parts of the screen to hide the client's data):



Just one screen replaces the 3 steps that were slowing users down: filling in the criteria, browsing through the search results and reviewing the details all fit nicely on one screen.

And the best part of it - iterative searching! 

Even though this new interface uses the out-of-the-box content information screen and the same middle-tier code to get search results, this new way of presenting the information created a net new way of using the system and took users up to a new bracket of productivity.

Because the search criteria are always shown on the left, users can change or further refine their search without losing their criteria or having to go back. They immediately see the results on the right.

Another feature they found very helpful was the seamless scroll powered by lazy loading. As you continue scrolling down, the grid requests additional records from the server, so you never have to lose focus and wait for more records to load. It also makes mass updates a breeze.


Summary

With the original UI, users had to constantly jump across 3 sequential, slow-loading web pages to find the information they needed. The new search interface eliminated the waiting and the counterproductive fragmented views, and also allowed users to iteratively refine their search criteria.


Task Time Improvement

  • Task time before - average - 3 min
  • Task time after - average - 30 sec

ROI Calculations

  • The average time savings per task - 2.5 min
  • The average number of times a task is performed each day per user - 9
  • Total average time savings per user - 22.5 min
  • Total number of product users - 29
  • Total average time savings for all users - 652.5 min, or 10.9 hrs
  • Average hourly pay of system users - $50
  • Average daily savings - $543.75
  • Average annual savings - $137,000
  • Typical product life span - 3 years
  • Total estimated savings over the product's lifespan - $411,000
  • The cost of implementing the new search UI - $19,500
  • Net productivity savings - $391,500
  • ROI on improving productivity - 2000% (see the arithmetic check below)
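For the record, here's how these figures reconcile, written out in LaTeX. The one assumption added is roughly 252 working days per year, which the annual figure implies:

    \begin{align*}
    2.5\,\text{min} \times 9 \times 29 &= 652.5\,\text{min/day} \approx 10.9\,\text{hrs/day}\\
    10.875\,\text{hrs} \times \$50 &= \$543.75\ \text{per day}\\
    \$543.75 \times 252\,\text{days} &\approx \$137{,}000\ \text{per year}\\
    3 \times \$137{,}000 - \$19{,}500 &= \$391{,}500 \approx 20 \times \$19{,}500\ (\approx 2000\%\ \text{ROI})
    \end{align*}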


Friday, September 12, 2014

Two person-weeks invested in a UI update bring over 300K in annual savings

… and this is not an unusual result!

In our recent projects I've started to pay more attention to everything UI, getting great results every time. I haven't seen that kind of savings and productivity improvement from any other type of technology initiative. Think about it. It's hard to find another area of the enterprise that could bring that big of a return... on such small investments!

'Middle tier' developers are keeping an eye on Java, code, HTML and scripts, refactoring and keeping it all fast and efficient. DBAs are on alert about everything that could be optimized in your Oracle databases. But hardly anybody tracks how people are using your system! What features are used the most? Which create the most trouble for your business guys? And which features are they looking for? (Not asking for - that's a different thing!)

When it comes to the ERP, CRM, ECM and many other ‘back office’ systems that most businesses operate - IT managers tend to turn away from everything UX. They don’t have the budget, time or motivation for engaging creative teams, conducting usability studies and subsequent application rewrites. 

Now if you ask them over a cup of coffee, they agree that things like better layouts and different screen flows would make their business clients more productive, but there's no way they can afford things like concept development, focus groups or information architecture. If a company ever goes for a new web site, that's a different story. But this internal stuff that "nobody sees"...

Even worse, users are known to blame themselves for any hardships they experience with any type of technology, so - chances are - you'll never know about these awesome gems of potential productivity - unless you examine them the right way.

Now let's look at a client's example. They operate a content management system used by 10 people, and it takes them about 30 min on average to assemble a batch of content for workflow review. The process is repeated about 5 times a day. After investing a week of development and a week of QA, that process now takes an average of one minute instead of 30. The total savings for their organization work out to about $300,000 a year (savings total about 25 hours a day across all users; at an average pay of $50/hr, annual savings come to about $300K - see the arithmetic below).
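Spelled out (the roughly 250 working days per year is my assumption, not the post's):

    \begin{align*}
    (30 - 1)\,\text{min} \times 5\,\text{batches} \times 10\,\text{users} &= 1{,}450\,\text{min/day} \approx 24\,\text{hrs/day}\\
    24\,\text{hrs} \times \$50\text{/hr} &= \$1{,}200\ \text{per day}\\
    \$1{,}200 \times 250\,\text{days} &= \$300{,}000\ \text{per year}
    \end{align*}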

On top of that, their development cost was zero because they had idle developers “on training” between the projects. And even if they were to hire a couple of expensive consultants from their ECM vendor - the first year savings alone would still make the whole thing totally worth it. 

Now, there are even more profound benefits to be gained from our little productivity-boosting mini-project:

Users no longer hate their Content Management system. They will now store a file in the CMS and attach a link instead of attaching a copy of the file. This eliminates duplicate and outdated copies, reduces storage costs and speeds up their backups.

Business users no longer have to spend a large portion of their day doing tiresome, repetitive work. Their minds free up and they focus on marketing strategies - the job they were hired to do. This results in more and better-quality marketing campaigns, more generated leads and better conversions.

I could continue, but I think you've got the point.

So my question to you is - do you personally - and your organization - care about the UIs your business guys are using? Have you seen similar results? Do you think our results are typical?

Please let me know what you think 

Friday, July 18, 2014

Database Expert's Insider Guide to Enterprise Content Management Systems - Part 2

Part 2 of my article is now available on Toad World. Check it out here. You may also like other articles there. Toad is an awesome (free+) tool with a great community, and it attracts some serious database minds.

Saturday, July 5, 2014

Database Expert's Insider Guide to Enterprise Content Management Systems

If you'd like to bring your database guy up to speed on the core concepts of Oracle WebCenter and ECM in general, I've just written an article for Toad World that will come in very handy. Check it out here. I'm looking at all the typical features and concepts of an ECM system that will help them get comfortable and get started with WebCenter. If there's anything I missed, please let me know. The link is once again below:

http://www.toadworld.com/platforms/oracle/w/wiki/10943.database-expert-s-insider-guide-to-enterprise-content-management-systems.aspx


Tuesday, June 24, 2014

Archiving In Webcenter Content - A Deeper Look - Part 2

Welcome back! Now that we "took a step back" and re-examined our understanding and our strategies, let's look at the tools available to us in WebCenter Content.

Archiver

Just as the name suggests, this is the tool designed to 'archive' your content. But what does that mean? In the specific case of Archiver, it allows you to:
  • Remove large amounts of content from the repository and compress it for backup storage or manual deletion
  • Transfer content to another Content Server instance in a manual or automated fashion
  • Perform mass updates of metadata values, which may come in very handy for moving content into another 'bucket' in your repository or tagging it (see the scripted sketch below)
You start by defining a query to select content you're going to move (See screenshot below)


And then you can define simple transformations, like field and value mapping, and set up manual or automatic replication to other Content Server instances (See screenshot below)
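If you'd rather script this kind of mass metadata update than drive the Archiver applet, a similar effect can be achieved with a few RIDC calls. The sketch below is my own rough illustration, not part of Archiver itself - the connection URL, credentials, query and custom field name are all placeholders you'd adjust for your repository:

    import oracle.stellent.ridc.IdcClient;
    import oracle.stellent.ridc.IdcClientManager;
    import oracle.stellent.ridc.IdcContext;
    import oracle.stellent.ridc.model.DataBinder;
    import oracle.stellent.ridc.model.DataObject;
    import oracle.stellent.ridc.model.DataResultSet;
    import oracle.stellent.ridc.protocol.ServiceResponse;

    public class MassMetadataUpdate {
        public static void main(String[] args) throws Exception {
            IdcClientManager manager = new IdcClientManager();
            IdcClient client = manager.createClient("idc://contentserver:4444"); // placeholder host/port
            IdcContext ctx = new IdcContext("sysadmin");

            // 1. Select the content to re-tag, using the same query syntax as Archiver.
            DataBinder search = client.createBinder();
            search.putLocal("IdcService", "GET_SEARCH_RESULTS");
            search.putLocal("QueryText", "dDocType <matches> `ADACCT`");
            search.putLocal("ResultCount", "200");
            ServiceResponse resp = client.sendRequest(ctx, search);
            DataResultSet results = resp.getResponseAsBinder().getResultSet("SearchResults");

            // 2. Update a metadata field on every matching item.
            for (DataObject row : results.getRows()) {
                DataBinder update = client.createBinder();
                update.putLocal("IdcService", "UPDATE_DOCINFO");
                update.putLocal("dID", row.get("dID"));
                update.putLocal("dDocName", row.get("dDocName"));
                update.putLocal("xDepartment", "Archive"); // hypothetical custom field
                client.sendRequest(ctx, update).close();
            }
        }
    }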



Expired Content

Expired Content provides an effective mechanism for automatically taking content out of the repository on a specific date. You simply specify a value for Expiration Date during check-in or a metadata update, and when that date arrives, your content will no longer come up in searches. (See screenshot below)


You can then find expired content using the Expired Content page (See screenshot below)





Please keep in mind that the Content Expiry mechanism is a rather radical solution: after the expiration date, these items will not show up in searches, and links to the item's web location will no longer work.
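For completeness, here is roughly what setting the expiration date at check-in looks like over RIDC. Again a hedged sketch: connection details and metadata values are placeholders, and the exact dOutDate format depends on your server's locale settings:

    import java.io.File;
    import oracle.stellent.ridc.IdcClient;
    import oracle.stellent.ridc.IdcClientManager;
    import oracle.stellent.ridc.IdcContext;
    import oracle.stellent.ridc.model.DataBinder;
    import oracle.stellent.ridc.model.TransferFile;

    public class CheckinWithExpiration {
        public static void main(String[] args) throws Exception {
            IdcClient client = new IdcClientManager().createClient("idc://contentserver:4444");
            IdcContext ctx = new IdcContext("sysadmin");

            DataBinder binder = client.createBinder();
            binder.putLocal("IdcService", "CHECKIN_UNIVERSAL");
            binder.putLocal("dDocTitle", "Quarterly Promo Page");
            binder.putLocal("dDocType", "Document");
            binder.putLocal("dSecurityGroup", "Public");
            // dOutDate is the standard Expiration Date field: after this date
            // the item stops showing up in search results.
            binder.putLocal("dOutDate", "12/31/2015");
            binder.addFile("primaryFile", new TransferFile(new File("promo.pdf")));

            client.sendRequest(ctx, binder).close();
        }
    }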

Retention Queries in Framework Folders

The new Content Server Framework Folders component replaces the older Contribution Folders, also known as Folders_g. It introduces a number of additional features, like removal of the 1000-items-per-folder limitation, better performance, and the new Retention Folders feature. (See screenshot below)



As the name suggests, it lets you define a query and apply a simple retention policy to its results - how many revisions of these content items you'd like to keep, and for how long (See screenshot below)



One thing to keep in mind: if you have the Records component enabled and a content item is assigned to a Retention Category in Records, that retention will override the period you specified in a Retention Query - the longer retention period will be used.

WebCenter Records


WebCenter Records is the ultimate retention management tool, with tons of flexible features for defining complex rules, time periods and disposition actions that can match virtually any organization's records policies. It will also let you keep track of your physical assets, perform complex audits and much more - too much to describe here.

Records Adapters

WebCenter Records also offers Adapters - software components that allow you to extend your retention policies over content that is not physically stored in WebCenter Records. You can apply retention to content stored on your file system, in Documentum, FileNet and MS SharePoint. WebCenter Adapters obtain all policies from the Records server, which manages the retention, and apply them to content stored in the corresponding system. The Adapter also sends information back to the Records server so it can maintain a complete and up-to-date catalogue of the enterprise's important content. The screenshot below shows the results of a Federated Search across the Content Server and linked repositories: (See screenshot below)


This lets you apply your retention policies to content more consistently with less administrative effort and less disruption for users – without the need to migrate content out of your existing systems across the enterprise.

For instance, you can freeze items in SharePoint and apply retention by search criteria and item locations.

Common Problem Areas

Now that we've looked at all the aspects of and approaches to archiving, let's take a look at common problem areas that often make organizations settle for less-than-optimal retention policies and archival strategies.

Your Physical Assets

Many organizations are still forced to manage large amounts of paper documents and other physical assets. These assets are not as easily accessible and manageable as their electronic counterparts, which often results in artefacts being stored longer than needed, lost items and other similar issues. It is often beneficial to review and eliminate physical assets that have reached the end of their useful life, or to consider scanning them and retaining electronic copies.

Collaboration Tools


Collaboration tools like SharePoint and wikis almost always result in a large inflow of user-generated content that is not very well structured and ages quickly. Most organizations don't have effective retention policies in place to manage this type of content - to eliminate outdated items and keep content relevant and easy to find. WebCenter Records' SharePoint Adapter may come in handy here, as it allows you to define and apply retention policies to your SharePoint repositories.

'Smart' content expiry


As I mentioned earlier, using the Expired Content feature with web content, such as your public sites and intranets, may result in broken links as expiring items are taken out of the repository. You can plan for this by running an 'Expiring Content' query ahead of time and updating the links that would otherwise get broken, or you could opt for other archiving mechanisms, such as metadata updates.

On the other hand, routinely running a broken links report is another good practice to implement.

Performance Issues


Many times when your system slows down, the first complaint heard from the sysadmin is "we have too much content". Sometimes that's true, but in most cases the system slows down for reasons other than approaching its maximum capacity.

Assessing Performance

Here are a couple of numbers for you to use as a guideline. By no means must your system match these exactly, as every configuration is different - these are just some ballpark figures for you to see where you're at.

Read-only requests - like page loads - should pump out about 20 requests per second per GHz of CPU, times the number of CPUs. A dual 2 GHz box should pump out about 80 pages per second.

For raw requests, such as content information requests, calls to GET_FILE, or check-ins (of small files, say around 200K), you should target 4 requests per second per GHz of CPU, times the number of CPUs in the system.
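Written as formulas (my formalization of the guideline above, with $f$ the clock speed in GHz and $N$ the number of CPUs):

    R_{\text{pages}} \approx 20 \times f \times N \qquad\qquad R_{\text{raw}} \approx 4 \times f \times N

For the dual 2 GHz box above, that gives 20 × 2 × 2 = 80 pages per second.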

Once again, these are just guidelines, and the numbers greatly depend on file size and many other factors.

Oracle WebCenter Content is an I/O-limited application, which means that overall performance is mostly restricted by the speed of your hard disks - not the software itself or the number of items you have in your repository.

Conclusion

In this short series we revisited the various practical meanings of the word 'archiving' as it applies to content management in general and to the specific features of Oracle WebCenter. We looked at the common problem areas for you to keep an eye on and at the set of tools that WebCenter Content offers for implementing your content retention and archiving policies.

Tuesday, June 3, 2014

Archiving In Webcenter Content - A Deeper Look

This article develops an understanding of the practical meaning of the word 'archiving' as it applies to content management in general, and to the practical business applications and ROI of your specific WebCenter implementation. We will review different aspects of archiving - obsolete, hidden, cut-off and deleted content, information architecture, and the physical design of archiving systems - and consider the compromise between the cost of storage and the speed of retrieval.

Introduction

WebCenter Content offers a great set of tools we can use for archiving. It gives us Retention Management, Content Expiry, Replication and many other features and tools, but are we using them efficiently? And what does Archiving mean after all?

Back in September, Deane Barker of Blend Interactive brought up a controversial topic in the Content Management Professionals group. He asked:


"'Archiving' is a popular word in content management, but what does it mean? Is there an accepted definition?"

Most of us remember the days when 'Integrated Document Archive and Retrieval Systems', or IDARS, was a large market segment in its own right. Tape archives and off-site storage were the first things that came to mind. But what about the 'News Archive' section of our corporate intranet? Should it even be called an 'Archive'? And what about an email archive? And what should we do with historic account statements for our customers?

All valid questions - questions I suggest we answer before looking at the wealth of tools available to us as part of our Oracle WebCenter Content license.

So may I suggest that you pause for a quick second and ask yourself this important question - the question that Deane has asked the crowd of just over 27,000 information management professionals back in September:

What does 'Archiving' really mean?


Which one of these options comes close to your organization's definition of 'Archiving'?

  1. Does it mean that you simply move your content to an 'Archive' section, just as so many web sites now do with their 'News Archive' sections? The content is still visible to the public, but simply moved to another section?
  2. Does it mean that we restrict permissions to hide the content from the public, mark it as 'archived', but still leave it in its current location, accessible to content contributors?
  3. Or does it mean that we move the content to another location, still viewable by contributors but not publicly accessible and not in its original location?
  4. Or maybe it means that you move it to some backup medium and completely remove it from your WebCenter instance?
  5. Or does it simply mean "deletion of outdated content"?

Pondering this will help you get the most value - not just out of the tools that come with WebCenter, but infinitely more importantly - out of the business content that your WebCenter Instance is tasked with managing in the first place! So pause now and please do your thinking before continuing to read.

Archiving as means of addressing system limitations

Back in the late 1990s we had little control over how CMS systems displayed content, so we were forced to 'archive' stuff to prevent it from showing up on the site. Those days are long gone, but even now we're still facing that type of limitation.

Here's one scenario that still forces people to 'archive' content stored in Contribution Folders - the notorious limit of 1000 content items and/or child folders per contribution folder. If your system uses Contribution Folders (as opposed to the newer Framework Folders), Folders_g actually throws a csCollectionContentMaxed exception when you try to add more than 1000 items to a folder.

Now, few people are aware of this, but you can actually solve this by increasing the limit through configuration variables. So that prompts another question:

If there's no foreseeable limit on the number of items you can easily store in the repository, and you can simply change permissions to remove outdated content from public view - do you still need to 'archive' anything?

With the cost of storage going down year after year and ever-faster search tools, the pressure to take content out of the repository just to improve performance has largely gone away. We're seeing this trend all around us, too. Gmail has increased its free mailbox storage from 1 GB to 15 GB in less than 10 years, so you're hardly ever forced to delete emails. 'Soft delete' - or 'Archive', as Gmail calls it - is what users now use instead of delete.

'Soft Delete' features

'Soft Delete' effectively removes content from the active view in the system, but it can always be accessed by search.

And that creates another problem! The problem that comes from the proliferation of irrelevant content.

Relevant vs. Irrelevant content

The ability to find or restore anything is great, but having multiple (including outdated) versions of important technical specs may get in the way of effective communication. The famously out-of-control SharePoint repositories are the first example that comes to mind.

You don't have to remove content, but that doesn't mean that you don't need to systematically identify content that is now less relevant. If the content is less relevant it could be tagged or moved to an archive location. If it isn't relevant at all, it should be deleted or sent down the pipe of your disposition process.

A discontinued product that is no longer being sold doesn't need to show up on the web site. Archiving keeps the information available to content managers in case any legal or compliance issues arise. And potential litigation is another great argument for getting rid of content that you are no longer obligated to keep.

Defining your governance

Now, every organization is different, and so is the decision-making process when it comes to defining the life cycle of your content. Whether it is a single person or a cross-functional content steering committee that makes your governance decisions, your Information Management team needs clear direction and a well-defined life cycle for each type of content.

Applying Retention Policies

You need to define what types of content your system is managing and what 'archive' and 'delete' means for each type - and how it is carried out. Your corporate retention policy is key, and your legal colleagues should be part of the discussion.

With that in mind, we should be ready to look at the tools available in WebCenter Content that make it easy to apply your content life cycle decisions.


That's it for now. In the next article we will look at the set of tools available in WebCenter Content and how to best use them to implement our archiving strategies.

Monday, April 21, 2014

Responsive Design - Practical Tips for Web Center Developers

This article will quickly make you comfortable with Responsive Design principles (RDP) and how they apply to your Oracle Content Management system. I'll also show you some real-world examples of the reasons for, and impact of, implementing RDP.

INTRODUCTION

RDP is an approach to designing HTML applications that look good on a wide variety of screen sizes, platforms and screen orientations - as opposed to Adaptive Design, where the user's browser is detected and the user is bounced to a "mobile" site that often looks awful on modern high-resolution devices.

Let's look at the fundamentals of RDP, then touch on the principles of Information Architecture (IA) and User Experience Management, and see how these concepts apply to enterprise applications as well as to public-facing, high-traffic web sites - the domain where this concept first became relevant.

QUICK DEMO

Check out how the Amazon.ca site looks in its mobile version on my tablet (See screenshot below)






Many times users are forced to install a non-stock browser to fake the User-Agent string and get these sites to serve their desktop version instead.

Now let's take a look at another site that offers a striking alternative - Jyske Bank from Denmark. Here's how the site looks on my wife's iPhone:




Browsing from an Android phone, it looks like this:



As the resolution gradually increases, the site transforms, as the screenshots below show:




And here's how the site looks on my laptop with a full HD screen:



A lot of things are at play here, and more visual elements are added as the screen real estate gets larger.
By contrast, Amazon's full site relies on just one thing when the screen size goes down - scrolling (See screenshot below)




What a difference compared to Jyske Bank's smooth user experience. 

WHY GO RESPONSIVE?

Responsive Design is a flexible, fluid web design that suits the screen resolution and the device in use. While modern smartphones and other mobile devices seem to be the most relevant application, supporting various screen resolutions across the enterprise is another strong case for using these principles. Not only will this provide a better fit on a smaller screen and eliminate excessive scrolling, it will also work around the inability of some devices to render heavy, JavaScript-reliant content.
Just a few years back, the majority of sites were designed, reviewed and approved in Photoshop. Static images were developed for the major types of pages on a site, then cut into graphics and implemented in HTML. When mobile browsing started to become a noticeable trend, designers simply began to produce a second set of designs, and that became their mobile sites. Now, with an ever-growing array of devices and screen resolutions, this 'mobile vs. desktop' approach leaves too many users with a less-than-optimal browsing experience. Desktop sites don't do well on most tablets, and most mobile sites don't look good on... pretty much anything.
Plus, there's all that extra work required to maintain multiple versions of the site - so why not build one site that works on every screen?

KEY PRINCIPLES

Let's look at the basic concepts at play in Responsive Design. Take a look at the following diagram



BROWSER AS THE REASON FOR THE DESIGN

As Jeremy Keith put it, "Stop Thinking in Pages. Start Thinking in Systems."
Instead of beginning with your graphical elements and thinking of how to make them look OK in different browsers, start with the needs of each of those different browsers. The mobile browser, the tablet and the desktop browser become the different viewports of your design.
You'll also need to take into account the connection speed and scripting capabilities of your clients. For instance, you may need to serve a different set of graphics to a mobile client that may be running on a slower connection than the ones you use for your desktop clients. And simply hiding your larger graphics with CSS will not make the site load any faster, so some server-side work may be required to support your design.
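To illustrate that last point, here's a minimal sketch (the class name and file name are made up for illustration): the media query hides the large image on small screens, but most browsers will still download the file, so no bandwidth is saved - which is why swapping graphics usually needs help from the server.

<!-- Hiding isn't saving: '.hero' and the image path are hypothetical -->
<style>
  @media screen and (max-width: 480px) {
    .hero { display: none; }   /* hidden from view on small screens... */
  }
</style>
<img class="hero" src="img/hero_large.jpg">
<!-- ...but the browser typically fetches hero_large.jpg anyway -->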

‘MOBILE FIRST’ PRINCIPLE

According to Marketing Land, as of November 2013, 28 percent (and growing) of all web traffic was coming from mobile devices. That percentage is even higher in the US and many other countries.
What that tells us is that smaller screens are the thing of the future, and designing your site so that it fits the mobile device's screen first makes sense. The site needs to look good and perform with limited JavaScript capabilities, and it may also take advantage of touch capabilities on mobile, where the popular mouse-over events will not work.
Taking this optimal mobile browsing experience as a baseline, you can progressively add new elements, such as additional content and graphics, without affecting the accessibility of the basic site.
This is the opposite of the design-degrading approach, which requires toning down heavy content to suit a mobile interface - an approach that often degrades the content itself.

PROGRESSIVE ENHANCEMENT PRINCIPLE

Using the 'Mobile First' principle, a mobile interface becomes the starting point for your website. Additional features and content are then added as device capabilities increase.
This Progressive Enhancement formula presents a better alternative to the Graceful Degradation formula used in Adaptive Design. In the latter, one scales back the complex features of a web interface, such as graphics, so that it can run on a mobile or low-resolution desktop client or over a slower network. As features are taken away, the appearance and functionality of the site are reduced, which results in a sub-standard browsing experience.
This is why the progressive approach works so well: it allows you to make content accessible on any device without reducing its features. You upgrade it on a preferential basis - only when the device supports the additional functionality.
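Here's a minimal sketch of the idea (the .content and .extras class names are hypothetical): the baseline styles deliver a single-column page that works on any device, and a min-width media query layers a two-column layout on top once the screen can accommodate it.

<!-- Progressive enhancement sketch - class names are illustrative -->
<style>
  /* Baseline: a single column that works on the smallest screen */
  .content, .extras { width: 100%; }

  /* Enhancement: a two-column layout for screens that can afford it */
  @media screen and (min-width: 768px) {
    .content { float: left; width: 70%; }
    .extras  { float: left; width: 30%; }
  }
</style>
<div class="content">Core content - available everywhere</div>
<div class="extras">Supplementary content - laid out alongside on larger screens</div>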

GRACEFUL DEGRADATION PRINCIPLE

The Graceful Degradation Principle is the opposite of Progressive Enhancement. It starts with your full-blown desktop site and uses media queries (please see below) to resize, reposition, hide or replace heavy graphic or JavaScript-driven elements with lighter versions.
The Media Queries web site (http://mediaqueri.es/) maintains an impressive array of RDP-based sites, but the site itself uses an Adaptive pattern - graceful degradation. It simply resizes its graphics and repositions its header when the screen gets smaller (See screenshot below)






Unlike Progressive Enhancement, where you spend the time to make sure that even on the smallest screen users receive full functionality - and they get extras as they progress to more capable devices - here you are forced to take away features and functionality as you try to degrade your design so that it runs on smaller form factors. All of your thinking is then done on the desktop side, and your resulting mobile user experience may feel awkward and lack many important features.

BREAK POINTS

Break Points are designer-specified thresholds in screen size where elements are added, replaced or rearranged. In the case of Jyske Bank, the product shelf splits into two at 1000px, the feature image disappears at 850px, the search box is replaced by a search button at 750px, the top menu is replaced by a menu button at 550px, the left feature on the top disappears at 500px, and the product shelf is replaced by product buttons at 450px.
Break points apply equally to both the Progressive Enhancement and Graceful Degradation principles.
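In CSS terms, each of those thresholds is simply a media query boundary. Here's a minimal sketch of two of the Jyske Bank-style break points (the class names are made up for illustration - they are not the bank's actual markup):

<!-- Break point sketch - .feature-image, .search-box and .search-button are hypothetical names -->
<style>
  .search-button { display: none; }             /* hidden on larger screens */
  @media screen and (max-width: 850px) {
    .feature-image { display: none; }           /* feature image disappears at 850px */
  }
  @media screen and (max-width: 750px) {
    .search-box    { display: none; }           /* search box is replaced... */
    .search-button { display: inline-block; }   /* ...by a search button at 750px */
  }
</style>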

THE FLUID GRID

Sizing your objects in pixels rather than percentages may look great on your desktop monitor, but it will not scale well on a smaller screen and will cause an awful lot of scrolling.
The same goes for using tables to lay out your page. Using DIVs and positioning them with CSS is a much better alternative. As the screen gets smaller, your DIVs will automatically 'flow' into a single column and remain accessible - and may even look good - without requiring much effort on your end and without any need to reduce or enlarge them.
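Here's a minimal fluid grid sketch (the .row and .col class names are hypothetical): the columns are sized in percentages, so they shrink and grow with the viewport, and a media query lets them stack into a single column on narrow screens.

<!-- Fluid grid sketch - class names are illustrative -->
<style>
  .row:after { content: ""; display: table; clear: both; }  /* contain the floats */
  .col { float: left; width: 33.33%; padding: 1%; box-sizing: border-box; }
  @media screen and (max-width: 600px) {
    .col { float: none; width: 100%; }   /* the columns 'flow' into a single column */
  }
</style>
<div class="row">
  <div class="col">First column</div>
  <div class="col">Second column</div>
  <div class="col">Third column</div>
</div>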

MEDIA QUERIES

Media Queries allow the page to adjust to the type of your output device (screen or printer) and its size. HTML4 and CSS2 already support media-dependent style sheets tailored for different media types. For example, a document may hide its background image and use different fonts when printed. It may also switch to a lighter set of graphics when the screen size gets smaller.
Among the media features that can be used in media queries are ‘width’, ‘height’ and ‘color’. By using media queries, presentations can be tailored to a specific range of output devices without changing the content itself.
Here’s how your CSS might look:

@media screen and (max-width: 480px) { .header { background: url('img/bkg_small.jpg'); } }
@media screen and (min-width: 481px) and (max-width: 600px) { .header { background: url('img/bkg_med.jpg'); } }
@media screen and (min-width: 601px) { .header { background: url('img/bkg_lg.jpg'); } }


CLIENT-SERVER INTERACTION

The server side plays an important role in many RDP implementations. Hiding heavy JavaScript content and switching to a different set of graphics based on the type of device cannot be accomplished on the client side alone.
Now, it's not easy to obtain the client's screen size on the server - without using Ajax or Flash - but simply setting a cookie on your start page (and updating it when the screen size or orientation changes) may be a simple enough alternative (see the sketch below).
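Here's a minimal sketch of that cookie approach (the cookie name screenBracket is made up for illustration): the script records a coarse screen-size bracket on page load and re-records it on resize or rotation, so server-side code - Idoc Script, in the case of Content Server - can read it on subsequent requests and pick the right set of graphics.

<!-- Screen-size cookie sketch; the 'screenBracket' cookie name is hypothetical -->
<script>
  function recordScreenBracket() {
    // Bucket the viewport width into coarse brackets the server can act on
    var w = window.innerWidth || document.documentElement.clientWidth;
    var bracket = (w < 480) ? 'phone' : (w < 1024) ? 'tablet' : 'desktop';
    document.cookie = 'screenBracket=' + bracket + '; path=/';
  }
  recordScreenBracket();                 // set on page load
  window.onresize = recordScreenBracket; // update when screen size or orientation changes
</script>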
And now let's take a look at the specific details of implementing RDP in your WebCenter-driven apps:

USING RDP WITH WEBCENTER

There's no out-of-the-box support for RDP yet, but with a bit of creativity most of these principles can be implemented with minimal effort.

BROWSER AS THE REASON FOR THE DESIGN

If you're using or planning to use WebCenter Sites, it comes with Mobility Server - a product that does it all for you, automatically formatting your site's content to fit the end user's device (See screenshot below)


Mobility Server offers in-context editing and preview of sites for deployment to thousands of different mobile device types. You can set themes by device family and even access the device's location to tailor your search results. Mobility Server also generates several different image sizes to optimize for different device types and for faster loading.

PROGRESSIVE ENHANCEMENT

If you're still relying on Site Studio, or you're integrating directly with Content Server, you can still take advantage of the Digital Asset Manager (DAM) component and the Video Manager functionality. DAM will automatically generate various resolution sets for all types of images and have them ready for use in your design, so you'll only have to check in one high-resolution image for each set.
It will also convert your videos to a streaming format and even generate thumbnails that you can use on your site.

GRACEFUL DEGRADATION

As I mentioned above, Idoc Script makes it easy to set and retrieve cookies, so you can manipulate your page on the server side based on the 'screen size bracket' parameter that you set on the client. DAM is also a great tool to consider for generating your image renditions at various resolutions.

THE FLUID GRID

If you're developing with ADF, I recommend checking out John Sim's article on developing fluid grids with ADF - he provides his own template here (http://cfour.fishbowlsolutions.com/2012/08/16/webcenter-portal-spaces-boilerplate-template-and-guide-to-responsive-design/)
John suggests going with a custom HTML template, because ADF is driven by custom tags, with each tag's output controlled by the render kit - HTML that you should not really modify.
Content Server and Site Studio do not pose any limitations on the kind of RD solution you'll be working on. Just be sure not to use Design Mode in Site Studio Designer, as it will mess up your code.

Conclusion

That's it for now. If you're new to RDP, you should now feel a lot more comfortable and have some insights and ideas about how to go about applying Responsive Design Principles with Oracle WebCenter. And if you're all for it, but your management is still not entirely convinced, this article should have given you a few good reasons for them to seriously consider it.


Saturday, April 19, 2014

Dmitri's Collaborate 14 slides are here:


If you attended any of my sessions last week at Collaborate - or couldn't make it but would still like a peek - they are now ready for you on SlideShare:

Responsive Design and Information Architecture with Oracle WebCenter Content - Introduction and Best Practices

available here

Archiving In Content Management - A Deeper Look

available here ... and ...

Data Capture Market of 2014 - Navigating Competitive Landscape

available here

Tuesday, January 28, 2014

Join Dmitri at COLLABORATE 14 – IOUG Forum, April 7-11, 2014, Las Vegas, NV


Come see me speak at Collaborate 14 in Las Vegas in April. This year I'll be presenting three sessions:

  • 607 Archiving in Content Management - a deeper look
  • 606 Responsive Design - key principles for achieving peak user productivity, high scalability and awesome response times for WebCenter Content deployments and
  • 609 Data Capture - competitive landscape
I'll see you in Vegas!