Disable Follow-up in Smart Listening, Shorter Replies
Summary
This discussion on the CommunityOne Builders Help forum centers on improving the "Smart Listening" feature of the CommunityOne bot. Users suggested changes to make the bot's automated replies less intrusive, specifically by sending only a single initial response instead of continuous follow-ups to a user's subsequent messages within a set timeframe. The developer acknowledged the feedback, explained the feature's original intent to answer otherwise-unanswered questions, and proposed modifications including ignoring moderators, sending a single reply instead of follow-ups, and adding a notice telling users to @mention the bot for immediate follow-up.
Just for the sake of clarity: Smart listening is meant to give new people a reply if they're asking something and nobody answers in the configured time.
Do you think the following changes would suffice:
a) ignore mods on smart listening
b) send a single message instead of follow-ups to every subsequent message
c) add a notice saying that users should @mention the bot if they want the bot to follow up immediately
So this way, once the bot answers someone it won't reply to every message from them in the next 10 minutes, as it's configured to do right now. It will respond once, and people can choose whether they want the bot to keep replying immediately.
I think building an "intuitive" AI that decides when it's best to intervene is quite hard, in the sense that sometimes what you think is intuitive or useful is totally not what other people would want.
We initially built the smart listening feature because we noticed there were people asking questions in some servers and then leaving because nobody was there to answer them.
So we do question detection using AI and then wait x minutes; if nobody answered, the AI looks at all the messages since that window started and writes its own reply.
After that, users would continue talking to the AI and it would be another x minutes until the AI answered, which is why the bot auto-follows up on subsequent messages from that user for a while.
But I guess this instant followup ended up being a bit intrusive.
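A minimal sketch of the proposed reply logic described above. This is illustrative only, not the actual CommunityOne implementation: the `Message` fields, function name, and the separate question-detection / human-answer checks are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical message shape; the real bot would read these from Discord events.
@dataclass
class Message:
    author_id: int
    author_is_mod: bool
    mentions_bot: bool

def should_reply(msg: Message, is_question: bool, human_answered: bool) -> bool:
    """Proposed smart-listening behavior:
    (a) ignore moderators,
    (b) reply once to an unanswered question instead of following up on
        every later message,
    (c) follow up immediately only when the user @mentions the bot."""
    if msg.author_is_mod:
        return False   # (a) mods don't need an automatic answer
    if msg.mentions_bot:
        return True    # (c) explicit opt-in to immediate follow-ups
    # (b) only the initial unanswered question gets an automatic reply;
    # subsequent messages in the old 10-minute window stay silent
    return is_question and not human_answered
```

The key difference from the current behavior is that nothing here tracks a follow-up window: once the single initial reply is sent, further replies require an @mention.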
I am going to look into the nickname issue. I think the solution will be to remove the field and just use whatever nickname the bot has in the server.
I was thinking that if you give the bot a nickname (its visible name for the given server), the bot should automatically recognize it and make it part of its personality.
So we wouldn't really need a config input, as you would only need to give it a nickname in discord.
If you have pictures of this behaviour that would be super useful as I will be working on this next!
I see, don't worry.
I think that one is understandable given the phrasing. I will be pushing this update later today, so this shouldn't be an issue for long.
thank you! we want to keep improving CommunityOne bot too
Potentially, if doing transcription + voice becomes cheap enough. What do you mean by dispatch services?
so for the last bit
"When asked about the server, don't mention Dovanguard; instead, talk directly about the "Battle for Azeroth [No Wipes]" project unless the person specifically asks about the Discord server. When discussing "Battle for Azeroth," analyze all the information you have and share everything you can on the topic in a maximally detailed and expansive response."
You can put it into our CTA. I think that might help; we use a different mechanism to check conditions.
i think the bot inherently wants to respond in shorter sentences, since this is how humans talk on Discord. It's possible that it's not responding in long sentences because we have some built-in mechanism to keep responses short...
do you want the bot to be normal.. or sort of have a mean tone
Oh having a setting specific for personality sounds like a great idea to me!
We do have some guidelines in the "guardrail" prompt we inject along with your custom prompt that may be biasing its personality; we could potentially add a setting that overrides those.
i think what we should do is actually allow a long convo mode or a short convo mode....
Another interesting piece of feedback I am hearing a couple of times is that everyone wants bots to be more human... although I am not sure how to make the bots "more human"
hmmm, it's possibly 50% prompt, 50% model choice
yep ... the ifxdoy feature
okay... yeah.. thanks for sharing the thoughts, it's cool to point out the pattern... I think this might be something that we can potentially expose to users for configuration of "personality"
it's possible the reason that "servility" is an issue is that we actually have it in our prompt to make it sound like this...
just give u the cool member role
try again
https://tenor.com/view/harrry-potter-professor-mc-gonagall-maggie-smith-excited-ooh-gif-7910234
can you check if our links work in product-feedback channel?
well.. can we make it work..
It is in our roadmap!
In the meantime we will also revise our personality prompt to make sure it's toned down
just a bit on the timeline: we should be able to put a one-week sprint toward a comprehensive revamp of Spark in about 4-5 weeks
it's on our list now!
this will ofc include new stuff, but the prompt update will come sooner as it's much simpler to do
There's a few updates coming soon™ (hopefully in a few hours): we're updating our AI models and also revising part of our prompt to avoid the issues with too much praise, jokes, and greetings (something both of you had in common in your prompts, actually)
It'll be shown in #📣・change-log when it's out
As Louisa mentioned, there is a full week of development we'll dedicate to our chatbot feature coming soon
We want to offer distinct options for personality and writing style that are intuitive and easy to use.
For personality and writing style inputs we want to offer presets where you can select one and see its prompt for reference and edit it as you see fit (and ofc the option to use no preset)
With the updated prompt and the newly updated models you may find that they follow instructions a bit better too; that was something we could slip into our development schedule early. We are definitely upgrading the overall chatbot configuration capabilities and experience in the nearish future.