I tried it. The duxelles mix blended really well. I used about a 50/50 ratio because that worked out to about two packs of mushrooms. It was really rich. The red wine taste didn't really come through, but the duxelles flavors went well with the feta.
I haven't tried it yet; I'm waiting until the ingredients go on sale.
That's kind of the point, and how is it different from a human? A human will weight local/recent contextual information as much more relevant to the conversation because they're actively learning and storing it (our brains work on more of an associative-memory basis than a temporal one). With our current models, though, that's simulated by decaying weights over the data stream. So when contextually correct output conflicts with "globally" correct output, the global answer has a tendency to win out, and that's more obvious. Remember, you can't actually make changes to the model as a user without active learning, so the model will always eventually return to its original behaviour as long as you can fill up the memory.
I'm trying to tell you that limited context is a feature, not a bug; other bots like Replika do the same thing. Even when all past data is stored server-side and available, it won't matter, because you need to reduce the weighting of old data or it prevents any significant change in the output values (and allows less and less change as the history grows larger). Time decay of information is important to making these systems useful.
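A minimal sketch of what that time decay could look like, purely as an illustration (the half-life and the exponential form here are made up, not how any particular model is actually implemented):

```python
import math

def decay_weights(num_turns: int, half_life: float = 10.0) -> list[float]:
    """Normalized weight per turn, oldest first; a turn loses half its
    influence every `half_life` turns (a made-up parameter)."""
    decay = math.log(2) / half_life
    # age 0 = most recent turn, so the last element is the newest
    weights = [math.exp(-decay * age) for age in range(num_turns - 1, -1, -1)]
    total = sum(weights)
    return [w / total for w in weights]

# An instruction given at turn 0 starts with all the influence,
# but it gets diluted as the history fills up:
for n in (1, 10, 30):
    print(f"with {n} turns of history, turn-0 weight = {decay_weights(n)[0]:.3f}")
# -> 1.000, then ~0.072, then ~0.010
```

Nothing ever "forgets" the instruction here; it just stops mattering relative to everything newer, which is exactly the fill-up-the-memory effect described above.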
Everyone rushing to over-supply the market is not the same thing as a lack of demand. Legal weed is probably the largest new market of the last five years! Saying the company is underperforming because of a lack of demand, when people are clearly buying from other companies, is such a hollow excuse.
Oh boy, you really need to read up on current events and history if you think Canada is some kind of feminist utopia.
The cannabis industry has been hampered by a lack of demand…
What? That I do not believe
I hear this from Americans a lot; here, everything is pretty much online nowadays (although a friend of mine had her identity stolen, so she has to go in person, which is her biggest complaint about the whole thing).
At that point, why not just embed the GPS tags in the ear tags we already put on cows? Or why can't we just spray-paint their butts like sheep? (Which I'm saying as a person who really knows nothing about this, but if it works for the sheep…)
Then the thieves are just going to cut off all the cows’ ears!
The problem isn't the memory capacity; even though the LLM can store the information, it's about prioritization/weighting. For example, if I tell ChatGPT not to include a word (say, "apple") in its responses, then ask it some questions, and then ask it which fruit-based pies are popular, it will tend to pick the "better" answer that includes apple pie rather than follow the rule I gave it a while ago about not using the word. We do want decaying weights on memory, because most of the time old information isn't as relevant, but it's one of those things that needs optimization. Imo we're going to get to the point where the optimal parameters for maximizing "usefulness" to the average user are different enough from what's needed to pass someone intentionally testing the AI. Mostly because we know from other AI (like Siri) that people don't actually need that much context saved to find it helpful.
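To make the apple-pie example concrete, here's a toy scoring model (every number here is invented, and real models don't literally keep a score table; this is just the shape of the trade-off):

```python
# Each candidate answer has a fixed "global" score (how good it looks in
# general) plus a penalty if it breaks the user's earlier rule. The
# penalty is scaled by a decaying weight, so the rule eventually loses.
GLOBAL_SCORE = {"apple pie": 1.0, "cherry pie": 0.8, "pecan pie": 0.7}
RULE_PENALTY = {"apple pie": 2.0}  # the "don't say apple" instruction

def best_answer(turns_since_rule: int, half_life: float = 5.0) -> str:
    rule_weight = 0.5 ** (turns_since_rule / half_life)
    scores = {a: GLOBAL_SCORE[a] - rule_weight * RULE_PENALTY.get(a, 0.0)
              for a in GLOBAL_SCORE}
    return max(scores, key=scores.get)

print(best_answer(0))   # rule is fresh -> "cherry pie" (rule respected)
print(best_answer(20))  # rule has decayed -> "apple pie" (global wins)
```

The "optimization" problem is basically picking that half-life: long enough that instructions stick, short enough that stale context doesn't drown out the current conversation.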
You don't get to complain about people being condescending to you when you're going around literally copy-pasting Wikipedia. Also, you're not right: major progress in this field started in the 80s; although the concepts were published earlier, they were basically ignored by researchers. You're making it sound like the NNs we're using now are the same as the 60s ones, when in reality our architectures, and even how we approach the problem, have changed significantly. It wasn't until the 90s–00s that we started getting decent results that could even match older ML techniques like SVM or kNN.
I'm surprised no one has said London Drugs (https://www.londondrugs.com/about-london-drugs/about-us.html); that's the closest to Amazon imo. shoppersdrugmart.ca and fortinos.ca are owned by Loblaws and have an expanded marketplace selection.
Costco.ca is an American company, but publicly traded. Well.ca used to be Canadian but is now owned by McKesson Corporation, which is American and publicly traded.
You could try to buy through Etsy or look for Canadian Shopify stores like Bee Kind Wraps (https://beekindwraps.ca/) (here is a complete list), which are more specific sites. You can often find the same products on marketplace sites, but the merchant pays lower fees if you go through their own site, so try searching for products too.
The Toronto police had that scandal with the bar in their HQ, so I think that eliminates them as a contender.
Last time I talked about this with the other TAs, we ended up concluding that most of the decent papers were close to the max word count or above it (I don't think the students were really treating it as a max, more like a target). Something like 50% of the word count really wasn't enough to actually complete the assignment.
The idea of NNs, or the basis itself, is not AI. If you had actually read D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning Internal Representations by Error Propagation," Sep. 1985, then you would understand this, because that paper is about a machine learning technique, not AI. If you had done your research properly instead of just reading Wikipedia, you would also have come across autoassociative memory, the precursor to autoencoders and generative autoencoders, which are the foundation of a lot of what we now think of as AI models. H. Abdi, "A Generalized Approach for Connectionist Auto-Associative Memories: Interpretation, Implication and Illustration for Face Processing," in J. Demongeot (Ed.), Artificial Intelligence and Cognitive Sciences, Manchester University Press, 1988, pp. 151–164.
Over-enthusiastic English teachers… and Skynet (cue dramatic music)
Hey, don’t go giving Big Brother any good ideas now (especially ones that would work on me)
Not the specific models, unless I've been missing out on some key papers. The 90s models were a lot smaller; a "deep" NN used to mean 3 or more layers, and that's nothing today. Data is a huge component too.
50% of people earn less than the median hourly wage (and since wage distributions skew right, even more than half earn less than the average).
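A quick check with made-up wages, since the mean/median distinction is the whole point here:

```python
import statistics

# Hypothetical right-skewed hourly wages: a few high earners pull the
# mean well above the median, so far more than half sit below "average".
wages = [15, 16, 17, 18, 20, 22, 25, 30, 60, 150]

mean = statistics.mean(wages)      # 37.3
median = statistics.median(wages)  # 21.0
below_mean = sum(w < mean for w in wages) / len(wages)
print(f"mean={mean}, median={median}, below the mean: {below_mean:.0%}")
# mean=37.3, median=21.0, below the mean: 80%
```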