KnowBrainer Speech Recognition
Topic Title: Future of Australian DMPE 4.3
Topic Summary: Australian version of DMPE will not be supported from this year
Created On: 10/25/2020 10:28 PM
 10/25/2020 10:28 PM
323386
New Member

Posts: 1
Joined: 10/25/2020

Just a quick comment to inform other users and potential users.

I have just recently returned to Australia after living in North America for a few years. Given the spelling differences between Australia and North America (colour v color, etc.), I went to purchase a local version of DMPE 4.3.

I have just been told by the local vendor that DMPE will not be sold from the new year, as part of the push towards Dragon Medical One (DMO) rather than stand-alone versions.

Having used both DMO and DMPE, I can see this causing some problems:

1. This is disappointing given the power of the scripting available in DMPE, which is not available in DMO.

2. Further, there is no ability to incorporate a dictaphone workflow, which is a problem when working in Australian hospitals that are still predominantly paper based.

3. There is no ability to dictate and have a typist listen to the recording and check the transcribed text against it.

I don't know if this is just a cost-saving measure in local versions of DMPE or a broader push towards limiting the stand-alone versions overall (e.g. the Australian legal version and Dragon Dictate on Mac).

As an aside, one curious thing about the current Australian version is that the licence is per user with unlimited installations, rather than the traditional limit of 5 installations.

Jd

 10/26/2020 02:35 AM
Mav
Top-Tier Member

Posts: 320
Joined: 10/02/2008

Hi!

That's a general trend with Nuance, starting with the medical version and already leaking into the professional and legal area (with Dragon Professional/Legal Anywhere).

Nuance is pushing DMO very hard here in Europe and, unfortunately, is quite reluctant in bringing it up to par with DMPE.

 

It doesn't look as if the scripting part will be addressed anytime soon, but there's been some progress with deferred correction.

While I can't see DMD/DMO incorporating deferred correction (since there's no way to tag a piece of dictation with an "ID" so that the transcriptionist can find it again later on), the most recent SpeechKit SDK version (that's basically the SDK for integrating the same speech recognition DMD/DMO uses into your applications) does include support for deferred correction (at the moment not for on-premise systems and not for US servers).

So chances are that deferred correction can be integrated into clinical systems once they integrate SpeechKit.
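Just to illustrate what that "ID" tagging would mean in practice (a hypothetical Python sketch of the workflow only, not the SpeechKit API or anything Nuance ships), each dictation job simply gets filed under an identifier that the transcriptionist can use later to pull up the draft text and audio together:

import uuid

class DeferredCorrectionQueue:
    """Hypothetical illustration of a deferred-correction workflow, not Nuance's API."""

    def __init__(self):
        self._jobs = {}  # job ID -> dictation record

    def submit(self, author, draft_text, audio_path):
        # The author finishes dictating; the job is filed under a unique ID.
        job_id = str(uuid.uuid4())
        self._jobs[job_id] = {"author": author, "draft": draft_text,
                              "audio": audio_path, "corrected": None}
        return job_id

    def correct(self, job_id, corrected_text):
        # The transcriptionist finds the job again by its ID and files the correction.
        self._jobs[job_id]["corrected"] = corrected_text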

 

Regarding the transcription of audio recordings, this "deferred recognition" has been dropped from Nuance's plans for the moment.

If you really rely on this workflow, this should be brought to your Nuance representative's attention, since they only implement anything new if the business case behind it is large enough.

In the meantime you could install a virtual audio device (there are several out there; e.g. google for "virtual audio cable") that routes the output of an audio player straight to the "microphone" used by DMO, letting you transcribe your recordings in real time by playing them back in an audio player of your choice.
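For illustration, a few lines of Python could do that playback into the virtual device (a sketch only: it assumes the sounddevice and soundfile packages plus an installed virtual cable, and the device and file names below are placeholders you would replace with your own):

import sounddevice as sd  # pip install sounddevice soundfile
import soundfile as sf

RECORDING = "dictation.wav"     # placeholder path to your recording
VIRTUAL_DEVICE = "CABLE Input"  # placeholder name; list real devices with sd.query_devices()

data, samplerate = sf.read(RECORDING)             # load the recording
sd.play(data, samplerate, device=VIRTUAL_DEVICE)  # "play" it into the virtual cable
sd.wait()                                         # block until playback has finished

DMO, set to use the other end of the cable as its microphone, then transcribes the playback in real time.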

 

HTH,

mav

 10/26/2020 03:32 PM
ericnixmd
Top-Tier Member

Posts: 304
Joined: 07/25/2017

If I'm forced to use an online speech recognition program, then I will choose M*Modal Fluency Direct. It's cheaper and seems to work better.

I tried DMO for a few months and it was absolutely horrible.  I would rather type my text than use it.



 10/29/2020 05:17 PM
MDH
Top-Tier Member

Posts: 2216
Joined: 04/02/2008

"If I'm forced to use an online speech recognition program, then I will choose M*Modal Fluency Direct. It's cheaper and seems to work better."

 

Keep in mind though that the command functionality of MMODAL's Fluency Direct is PRIMITIVE compared to Dragon's.

 

MDH



 10/30/2020 03:28 AM
Mav
Top-Tier Member

Posts: 320
Joined: 10/02/2008

Originally posted by: MDH  

Keep in mind though that the command functionality of MMODAL's Fluency Direct is PRIMITIVE compared to Dragon's.

At least from their documentation (I haven't had first-hand experience, though), you can write scripts in JScript or VBScript and use many of FD's functions from your script.

 

There even seems to be a Dragon compatibility layer for transferring Dragon scripts to M*Modal.

 

I'm getting more and more impressed...

 

mav

 

 10/30/2020 11:16 AM
MDH
Top-Tier Member

Posts: 2216
Joined: 04/02/2008

Yes, but I have had first-hand experience using MMODAL Fluency Direct. The actual dictation recognition/accuracy is quite good, at least as good as Dragon's and maybe even slightly more accurate, and it generally doesn't need as much TLC as Dragon. So for 95% of docs, it would be the preferable way to go.

However, for those who use custom commands, best of luck. Although it is true that one can import Dragon Advanced Scripting commands into MMODAL, one should presume only that the import itself mostly works; in fact, I helped them get the import process from semi-working to mostly working a few years ago. That said, getting a command to import does not necessarily translate into it working quickly, or at all. Multi-step custom commands frequently "time out" and abort part-way through carrying out all of the steps, and each step executes very slowly compared to Dragon. Additionally, recognition of the spoken command names is awful. This is probably the worst problem; it was the difficulty that made me abandon wasting any more time on custom commands and led to my decision that MMODAL Fluency Direct was not a viable option for me. Workflow and efficiency were severely impaired.

The built-in (non-custom) commands work fine. However, we need to keep in mind that the people on this forum are clearly a tiny minority of speech recognition users. For the 95% of docs who "turn on the microphone, talk, turn off the microphone (possibly skipping one of these steps)" and occasionally use a built-in command, it is probably the preferred program. For anyone using custom commands to be truly efficient and as hands-free as possible, Dragon is the way to go.

 

MDH


