This is an easy and safe way. Assume: int[][] arr = new int[3][4];
Arrays.sort(arr, (a, b) -> Integer.compare(a[0], b[0]));
Below is the error message as text:
---------------------------------------------------------------------------------------------------
FAILURE: Build failed with an exception.
* Where:
Build file 'G:\github\Unveil_Entertainment\Unveil\unveil_flutter\android\build.gradle.kts' line: 16
* What went wrong:
A problem occurred configuring project ':app'.
> [CXX1101] NDK at C:\Users\Hp User\AppData\Local\Android\Sdk\ndk\26.3.11579264 did not have a source.properties file
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.
BUILD FAILED in 1s
Running Gradle task 'assembleDebug'... 2,237ms
Error: Gradle task assembleDebug failed with exit code 1
I would recommend getting started here -> https://techcommunity.microsoft.com/blog/appsonazureblog/deploying-strapi-on-azure-app-service/4401398
It is a quick way to deploy (using an ARM template), and it provides pre-built integration with Azure App Service and other Azure services such as MySQL or PostgreSQL, Azure Blob Storage, Azure Communication Services for email, and others.
Sorry, your screenshots aren't available. I'm dealing with building the packages for Samsung for Heimdall: no issues with installing the flash program, but issues with setting up the packages and PIT files, and problems running correctly through the command line or the GUI. Can you help? Samsung SM-F928U.
You are getting the "Invalid email" error because your fieldValues.email is set to false (a boolean) instead of the actual email string. The regex validation fails because it’s not checking a string.
So, change this:
const [fieldValues, setFieldValues] = useState({ name: false, email: false, message: false })
to this:
const [fieldValues, setFieldValues] = useState({ name: "", email: "", message: "" })
Here, stateKey is also being assigned a boolean value instead of a string, so change this code:
const handleInputClick = (stateKey) => {
  setFieldValues({ ...fieldValues, [stateKey]: true })
}
to this:
const handleInputChange = (e, stateKey) => {
  setFieldValues({ ...fieldValues, [stateKey]: e.target.value })
}
Hopefully this works for you.
Windows does not natively support X11 forwarding, as it doesn't run an X11 display server. The best way to use a Linux UI on a Windows machine is with VNC, such as RealVNC.
def has_conflict(self):
    return (
        Promotions.objects
        .filter(
            applicable_products__in=self.applicable_products.all(),
            priority=self.priority,
        )
        .exclude(promotion_id=self.promotion_id)
        .filter(
            models.Q(start_date__lte=self.end_date),
            end_date__gte=self.start_date,
        )
        .exists()
    )
Nowadays (2025) a very good option is to use Pandas' read_spss function.
The right way to solve this without much PHP overhead, and with faster processing, would be to create duplicate foreign keys of uuid (string) type and a second relation, e.g. movieCategoryByUuid, where you would specify the foreign keys as uuids; you should also put an index on the new foreign key for faster access.
Then you will be able to run your original code in the controller without any changes except for the relation used to create movie_categories.
Did you ever get an answer to this? Having the same issue
You can disable the SSL certificate check in Jupyter Notebook on Mac:
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
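As a quick sanity check (my addition, not part of the original snippet), you can confirm that the default context really skips verification after the override:

```python
import ssl

# Route the default HTTPS context used by urllib and friends
# through the unverified factory (only do this for local testing).
ssl._create_default_https_context = ssl._create_unverified_context

# The default context now skips hostname checks and certificate verification:
ctx = ssl._create_default_https_context()
print(ctx.check_hostname)                # False
print(ctx.verify_mode == ssl.CERT_NONE)  # True
```

Keep in mind this disables verification process-wide, so it belongs in notebooks and local experiments only.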
A few main things could cause this issue.
The NoClassDefFoundError for EnumEntriesKt occurs because the new UI version 2.9 requires Kotlin version 1.8.20 or higher.
Update the Kotlin plugin version in your project’s root build.gradle file to at least 1.8.20 (or higher, such as 1.9.0).
Apply the Kotlin plugin in your app’s build.gradle file:
```
apply plugin: 'org.jetbrains.kotlin.android'
```
The issue was resolved after I deleted the .firebase directory, which contained files named hosting.*.cache.
By default, the username and password are set to admin and pass unless you explicitly change them. Try using these credentials to log in.
There is now a checkbox to turn off references. Just go to Settings, search for "code lens", and you should see it.
Thanks for alerting me to this @IFTRM from Honduras. What was going wrong was that the code in Harrell's book uses the princomp() function from the stats package, *not* the princmp() function from the Hmisc package; strange given that the author of the book also created the Hmisc package. In any case, simply changing `princomp()` to `princmp()` meant that the downstream plot() function now allows the k= argument.
So
plot(prin.raw,
     type = 'lines',
     main = ' ',
     k = 13,
     ylim = c(0, 3))
which now results in what I want.
throttled-py supports multiple algorithms (Fixed Window, Sliding Window, Token Bucket, Leaky Bucket & GCRA) and storage backends (Redis, in-memory), enabling function-level rate limiting across diverse scenarios.
import requests
from throttled import Throttled, rate_limiter, exceptions

# Default: Token Bucket algorithm
@Throttled(key="/posts", quota=rate_limiter.per_sec(10))
def api_call():
    try:
        response = requests.get('https://jsonplaceholder.typicode.com/posts', timeout=1)
    except Exception as e:
        response = e
    return response
In case this helps someone, I'm posting a direct answer.
SpanContext spanContext =
    SpanContext.create(
        IdGenerator.random().generateTraceId(),
        IdGenerator.random().generateSpanId(),
        TraceFlags.getDefault(),
        TraceState.getDefault());
...
logRecordBuilder.setContext(Context.current().with(Span.wrap(spanContext)));
logRecordBuilder.emit();
...
https://github.com/open-telemetry/opentelemetry-java-instrumentation
It seems like the head lengths of the arrows in the legend are too short. If you want to keep the -> arrow style, but still have a reasonable legend, you can create a custom arrow style like this:

from matplotlib.patches import ArrowStyle, FancyArrowPatch
from matplotlib.legend_handler import HandlerBase

arrowstyle_legend = ArrowStyle.CurveB(head_length=0.2, head_width=0.12)  # tune these numbers as needed

class ArrowHandler(HandlerBase):
    def create_artists(self, legend, orig_handle, xdescent, ydescent, width, height, fontsize, trans):
        arrow = FancyArrowPatch((0, 3), (25, 3), arrowstyle=arrowstyle_legend,
                                color=orig_handle.get_edgecolor(), mutation_scale=20)  # tune these numbers as needed
        return [arrow]
The PixelCopy API will copy the window you requested.
Were you able to read the MCC code from the POS terminal?
Thank you
As of version 1.17.0 of the Azure Functions extension for VS Code, the prompt for the programming model is hidden by default and the v2 programming model is selected automatically. According to the change log, if you need to use the old programming model, you can set the azureFunctions.allowProgrammingModelSelection setting to true.
I just have some tips for you:
Add the typescript and reactjs tags
Include the exact error message you're seeing
Show how you're currently using/rendering the activePanel in your parent component
Mention if you're using a specific state management solution (just useState, Redux, etc.)
I found this method on ClientSettings.builder():
.requireAuthorizationConsent(false)
Check what comes out in the console. From what I see in the image, in the menu where you have the debug option, I don't see that you have two emulators connected. Check what appears in the console or terminal, since there you can see what the error is when running the application, and in the process check with flutter doctor that everything is OK.
Might be a bit late, but here is a more formal solution.
It is documented here:
SELECT CAST(num AS float) AS num
FROM OPENROWSET(
    PROVIDER = 'CosmosDB',
    CONNECTION = 'Account=<account-name>;Database=<database-name>;Region=<region-name>',
    OBJECT = '<container-name>',
    [ CREDENTIAL | SERVER_CREDENTIAL ] = '<credential-name>'
)
WITH (num varchar(100)) AS [IntToFloat]
Don't know if anyone will bump into this, but check out my asset.
It's called Events 2.0 for Unity
It supports static functions among other cool things
It was built on Unity 2019.4.40f1
Apparently your Flutter is out of date; please update to a newer version. Flutter now uses native Kotlin for the default configuration files in the your_project\android folder.
If updating Flutter is not an option, access this link here.
Hope it helps!
You're working with one of the most common Python + VS Code environment issues — Python interpreter mismatch. This happens when your code editor (VS Code) is not using the same Python interpreter where discord.py is installed.
Even though the module is installed correctly (you confirmed via CLI and help>modules), VS Code might be running the script using a different interpreter, which doesn’t have discord.py.
You already know it's here:
C:\Users\MYUSER\AppData\Local\Programs\Python\Python313\
You can double-check by running this in the command prompt:
where python
or
py -3 -c "import sys; print(sys.executable)"
In VS Code:
Press Ctrl+Shift+P (or F1) to open the command palette.
Type: Python: Select Interpreter
Pick the interpreter that matches this path:
C:\Users\MYUSER\AppData\Local\Programs\Python\Python313\python.exe
If it's not listed, click "Enter interpreter path" and manually navigate to it.
Once you’ve selected the correct interpreter:
Restart VS Code (optional but recommended)
Check your terminal (bottom of VS Code), it should now use the correct environment
Try running your .py script again
In your Python script, add:
import discord
print(discord.__version__)
Then run it:
python yourscript.py
If everything is correct, it should print:
2.5.2
Here’s what you can do:
Go to settings.json (Ctrl+Shift+P > "Preferences: Open Settings (JSON)") and make sure there is no incorrect pythonPath setting.
Reinstall the Pylance extension from the VS Code Marketplace if needed (Pylance is an editor extension, not a pip package), or reload VS Code's language server.
My solution to this in .BAT files is:
@echo off
set "sourcefolder=%~dp0"
echo Source folder from this script is: %sourcefolder%
timeout 30
Update the build script in package.json:
"scripts": {
"build": "chmod +x node_modules/.bin/vite && vite build"
}
I just hit the same issue today and found this magic command:
Java: Clean Java Language Server Workspace
Just search for that command in the top search bar (or use Ctrl/Cmd+Shift+P). It will clear the cache and reload the window, and voilà.
In .editorconfig add:
dotnet_diagnostic.SA1513.severity = warning
Here's a hacky solution: define a query that returns a string, and return the whole object as JSON.
It turns out that { elements: { type: 'string' } } is correct per the docs: https://ajv.js.org/json-type-definition.html#elements-form, but an auto-import occurred that resulted in the wrong import path being used.
Incorrect import caused the issue: import Ajv from 'ajv';
Corrected import resolved the issue: import Ajv from 'ajv/dist/jtd';
You can wrap your query in a select to convert to json and achieve this result.
Example
select TO_JSON_STRING(t, true) FROM (select 'apple' as fruit) t
Outputs
{
"fruit": "apple"
}
No, the body element's height does not need to be greater than or equal to the html element's height. It often isn't by default, and this is normal behavior in browsers. The browser treats the body as the main scrollable area, and it can grow taller than the html element. You should set html and body to 100% height if you want a full-page background color or image, or if you are working with flex/grid-based layouts. This will prevent scrolling issues and layout glitches.
If you have a model and stack that structurally expects a text input and needs a tokenization LAYER within the model graph, not "pre-tokenizing the text before sending it through the model graph", it seems to be astoundingly not straightforward how to make this work:
Here is a concrete example:
# Model that we want to work
inp = tf.keras.layers.Input(shape=(), dtype=tf.string)
# gp2_tokenizer = TokenizerLayer(max_seq_length=max_seq_length)
gp2_tokenizer = NewTokenizerLayer(max_seq_length=max_seq_length,tokenizer_checkpoint=tokenizer_checkpoint)
embedded = tf.keras.layers.Embedding(...)(gp2_tokenizer(inp))
flattened = tf.keras.layers.Flatten()(embedded)
base_model = tf.keras.Model(inputs=inp, outputs = flattened)
# A second model (logistic regression model) will take this as its input
# ... (This basic setup works with the GPT2 tokenizer in KerasNLP,
# but fails when trying to do the same basic thing with the HF tokenizer).
# For reference, the code is working and validated with this used to
# instantiate the object gp2_tokenizer:
class TokenizerLayer(tf.keras.layers.Layer):
    def __init__(self, max_seq_length, **kwargs):
        super(TokenizerLayer, self).__init__(**kwargs)
        self.tokenizer = GPT2Tokenizer.from_preset("gpt2_extra_large_en")
        self.preprocessor = GPT2Preprocessor(self.tokenizer, sequence_length=max_seq_length)
        self.max_seq_length = max_seq_length

    def call(self, inputs):
        prep = self.preprocessor([inputs])
        return prep['token_ids']

    def get_config(self):
        config = super(TokenizerLayer, self).get_config()
        config.update({'max_seq_length': self.max_seq_length})
        return config

    @classmethod
    def from_config(cls, config):
        return cls(max_seq_length=config['max_seq_length'])
# Simple case attempt:
# This Fails because the huggingface tokenizer expects str or list[str]
# and is internally being fed a tensor of strings in call()
# by the Keras backend.
class NewTokenizerLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        tokenized = self.tokenizer(inputs.numpy().astype("U").tolist(),
                                   max_length=self.max_seq_length,
                                   padding='max_length',
                                   truncation=True,
                                   return_tensors='tf',
                                   return_overflowing_tokens=False)
        return tokenized
# The logical next step seems to try to convert to string: ... well about that
# Raises: OperatorNotAllowedInGraphError:
# Iterating over a symbolic `tf.Tensor` is not allowed
class NewTokenizerLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        inputs = [x.decode('utf-8') for x in inputs]
        tokenized = self.tokenizer(inputs.numpy().astype("U").tolist(),
                                   max_length=self.max_seq_length,
                                   padding='max_length',
                                   truncation=True,
                                   return_tensors='tf',
                                   return_overflowing_tokens=False)
        return tokenized
# Raises an error: EagerTensor has no attribute .numpy()
class NewTokenizerLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        inputs.numpy().astype('U').tolist()
        tokenized = self.tokenizer(inputs.numpy().astype("U").tolist(),
                                   max_length=self.max_seq_length,
                                   padding='max_length',
                                   truncation=True,
                                   return_tensors='tf',
                                   return_overflowing_tokens=False)
        return tokenized
# Raises TypeError: Input 'input_values' of 'UnicodeEncode' Op has type string that does not match expected type of int32.
class NewTokenizerLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        inputs = tf.strings.unicode_encode(inputs, 'UTF-8')
        tokenized = self.tokenizer(inputs.numpy().astype("U").tolist(),
                                   max_length=self.max_seq_length,
                                   padding='max_length',
                                   truncation=True,
                                   return_tensors='tf',
                                   return_overflowing_tokens=False)
        return tokenized
... I have made several other attempts ...
If anyone knows how to get this to work, I would love to see it, but it looks like I am going to have to develop a tokenizer that meets the requirements from scratch.
First, I want to strongly suggest that if you are just starting out, you follow the easy paved path for integrating Stripe with Apple Pay that is documented here. This will be the easiest and most straightforward approach to using Apple Pay with Stripe, and will likely cause you the least amount of headache.
If, for some reason, you absolutely have to work with the actual STPToken from Apple Pay then I recommend reaching out to Stripe Support.
Or the simplest solution:

def build_sentence():
    s1, s2, s3, s4 = list_benefits()
    print(s1 + " is a benefit of functions!")
    print(s2 + " is a benefit of functions!")
    print(s3 + " is a benefit of functions!")

build_sentence()
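For a self-contained sketch of the same idea (list_benefits here is a hypothetical stand-in for the one defined in the exercise; returning the sentences instead of printing inside the function also makes it easy to test):

```python
def list_benefits():
    # Hypothetical benefit strings; the original exercise supplies its own.
    return ("More organized code", "More readable code",
            "Easier code reuse", "Allowing programmers to share code")

def build_sentences():
    # Build one sentence per benefit instead of printing inside the function.
    return [b + " is a benefit of functions!" for b in list_benefits()]

for sentence in build_sentences():
    print(sentence)
```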
After extensive troubleshooting, the issue was resolved thanks to the insight provided by @Doug Stevenson in the comments. Switching to the proper require("firebase-functions/v2/https") import and using the onCall((request) => { ... }) signature, along with accessing secrets/config via defineSecret/defineString and request.auth / request.data, completely resolved the contradictory auth: "VALID" + 401 error. The v1 compatibility layer seems to have issues correctly propagating the validated auth context in this scenario.
Yes, and there are other ways too like:
std::latch where you can use count_down()
std::barrier where you can use arrive_and_wait()
std::counting_semaphore where you can use release() / acquire()
I think this may be outdated, the Organization Policy Administrator role is no longer in the role list when trying to give myself access.
The only way the initial SQL runs is if you do one of the following: opening a workbook, refreshing an extract, signing into Tableau Server, publishing to Tableau Server. In my case, the extract refresh is the best option.
Date(item.Date) or Date.parse(item.Date) is ignored by the PrimeReact library.
Sample code: https://codesandbox.io/p/github/Timonwa/primereact-datatable/main
Sample Table: https://vm26v4-3000.csb.app/
What do you mean the registry isn't indexed? The keys make it like a hashtable, and therefore maybe even superior.
I think the middleware is causing the issue. Removing runtime: 'nodejs' from the config object in middleware.ts fixed the issue for me.
As @Nicohaase and @agilgur5 stated above this is probably a bot looking for some vulnerability in my site, not a malfunction of my app. Thank you
From: https://github.com/andialbrecht/sqlparse
>>> import sqlparse
>>> # Split a string containing two SQL statements:
>>> raw = 'select * from foo; select * from bar;'
>>> statements = sqlparse.split(raw)
>>> statements
['select * from foo;', 'select * from bar;']
The root cause is that JConsole couldn't find the path to the Temp folder correctly. That's why hsperfdata_myname wasn't created when launching JConsole.
What you can do is run the commands below to figure out the Temp path:
echo %TEMP%
echo %TMP%
They should both point to the same location, e.g.: C:\Users\myname\AppData\Local\Temp
If not, fix them by searching for "Edit the system environment variables" in your Windows search.
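As a cross-check (my addition, not from the original answer), Python's tempfile module resolves the temp directory from the same TEMP/TMP environment variables, so it is a quick way to see what other processes will pick up:

```python
import os
import tempfile

# gettempdir() consults the TMPDIR, TEMP and TMP environment variables
# (in that order) and falls back to a platform default if none is set.
temp_dir = tempfile.gettempdir()
print("Resolved temp dir:", temp_dir)
print("TEMP =", os.environ.get("TEMP"))
print("TMP  =", os.environ.get("TMP"))
```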
I was getting this error as well, and nearly drove myself crazy trying to fix it by installing all kinds of keyrings (tried this and this).
Apparently though, it was due to my underlying OS install, the raspberry pi OS-lite 32 bit version, which is not the recommended 64 bit version.
Check out this thread for more information: https://forum.openmediavault.org/index.php?thread/51223-apt-update-public-key-errors/
Updated Link as of April 2025 for Celery Long Polling
Adjust the value of BROKER_TRANSPORT_OPTIONS, specifically wait_time_seconds, which has a default value of 10 seconds.
plugins: [
...,
vueDevTools({
appendTo: "app.ts",
}),
]
Add "appendTo" in the vueDevTools options in your vite.config.ts.
Source: https://github.com/vuejs/devtools/discussions/123#discussioncomment-9201987
Replace all instances of /wiki in the code with https://dustloop.com/wiki, and it will work.
There's another kind of autocomplete now called "inline predictions" and disabling it (at least for contenteditable divs, AFAICT) is only possible when creating the WKWebView.
See allowsInlinePredictions here: https://developer.apple.com/documentation/webkit/wkwebviewconfiguration
You mean like this?
library(leaflet)
library(htmlwidgets)
leaflet() %>%
addTiles() %>%
setView(lng = -121.2722, lat = 38.1341, zoom = 10) %>%
addMiniMap(tiles = providers$OpenStreetMap, position = 'topleft', width = 250, height = 420) %>%
onRender("
function(el, x) {
var map = this;
if (map.minimap && map.minimap._miniMap) {
var mini = map.minimap._miniMap;
var style = document.createElement('style');
style.innerHTML = `
.transparent-tooltip {
background-color: transparent !important;
border: none !important;
box-shadow: none !important;
font-weight: bold;
}
`;
document.head.appendChild(style);
L.circleMarker([37.79577, -121.58426], { radius: 2 })
.setStyle({ color: 'green' })
.bindTooltip('mylabel', {
permanent: true,
direction: 'right',
className: 'transparent-tooltip'
})
.addTo(mini);
}
}
")
Why not use ChatGPT? It will give you the answer.
It also worked for me with:
pip install numpy==1.23.5
There was a warning when creating the keystore:
Warning: Different store and key passwords not supported for PKCS12 KeyStores. Ignoring user-specified -keypass value.
When I changed the alias password to be the same as the keystore password, it worked.
I often get errors like this when my object is grouped. Try first to ungroup:
phy1 %>%
ungroup() %>%
filter(sampleType == input$sample.type, site == input$cave)
In my case, I added these two annotations that worked:
annotations:
argocd.argoproj.io/compare-options: IgnoreExtraneous
argocd.argoproj.io/sync-options: Prune=false
there is no plugin.php on the plugins page
<span style='display:inline-block;text-align:center;'>Hello world</span>
Using map_batches instead of map_elements runs pretty fast for my >4 million rows:
df = df.with_columns(pl.col("b").map_batches(lambda x: x.to_numpy().transpose(0, 2, 1)))
This is my code:

from colorama import init, Fore, Back, Style

init()  # enable ANSI colors on Windows terminals
print(Fore.GREEN + Back.RED + "This is colored text!" + Style.RESET_ALL)
Where do you see Project > Properties > Application > General? The "Application" section doesn't exist on my VS2022.
Check out my solution in this post.
I was able to find the vnet by looking at the agent profiles:
for (pool <- k.innerModel().agentPoolProfiles().asScala) {
  val subnet = pool.vnetSubnetId()
}
I had to use FromSql:
threads = ctx.TblThreads.FromSql<TblThread>(fs).ToList();
Have you been able to solve this? I have the same problem.
Well, I cannot really explain why, because I lack years of experience with Java and the Plug-in Development Environment, but the problem seems to be that my XML unmarshalling is in a different plug-in than my generated classes.
If I create my JAXBContext from ObjectFactory.class in a plug-in project different from the one where the package nodeset.generated is defined (the package that holds the XJC-generated classes), then the context doesn't know about the UANodeSet class. I find it strange, because nodeset.generated is being exported, and imported in the unmarshalling package.
I moved the unmarshalling methods to nodeset.generated and everything worked like a charm.
If someone can give the details on why this could be, I will happily edit this answer.
Have you inspected the browser developer console? Or can you enable detailed errors to get more verbose output in the developer console?
I have seen this only once, with one user in our Blazor Server application, which has a very large code base. I never successfully found the issue. The symptoms might be different from what you are seeing as well. For this one user, if they were on a different tab for about one minute and then returned, it would crash the circuit. The console would show an unknown error. It wasn't always 60 seconds either; sometimes it would be fine. It didn't matter which page, etc. Again, it was only one user.
The next day, when I went to put in more effort to troubleshoot it, the problem was gone and has never come back since.
Thanks to all. Everything mentioned, plus updating IDEA, helped!
"Would it be better handled in the enum itself as each different case would require a different type of conversion?"
Do not put the conversion in the enum. The enum has no knowledge of the height, weight, or temperature, so it has no value to compute. It only knows what is being measured.
"Would I have to define the value property in the data model as a UnitType as opposed to a Double?"
When you are doing any kind of measurement, you want to store it in the most precise unit. That means you store your units in the metric system. Don't use Double. It is inherently imprecise. Use Int. Store in centimeters instead of meters for height. Use grams instead of kilograms for weight and so on.
And use Apple's Measurement API ;)
A->>B is an MVD.
A relation R is in 4NF if, whenever a nontrivial MVD X->>Y holds, X is a superkey.
An MVD A->>B is trivial if A⊇B or A∪B=R.
Since A∪B=R here, A->>B is a trivial MVD; thus there is no nontrivial MVD, and the relation is in 4NF.
So A is not actually a superkey; it is just that A->>B is trivial.
Another approach I used with my project was the Avalonia.Hosting library. In my Program.cs, I have:
builder.Services
.AddSerilog()
.AddTransient<IUpdaterSettings, UpdaterSettings>()
.AddTransient<VersionPoster>()
.AddTransient<UpdateChecker>()
.AddTransient<DownloadManager>()
.AddSingleton<UpdateService>()
.AddTransient<NetworkScannerPageVm>()
.AddSingleton<AboutPageVm>()
.AddTransient<MainWindowVm>()
.AddTransient<NetworkScannerPage>()
.AddTransient<AboutPage>()
.AddTransient<HomePage>()
.AddTransient<MainWindow>()
.AddAvaloniauiDesktopApplication<App>(ConfigureAvaloniaAppBuilder);
That last line being the magic sauce.
However, I'd like to use Scrutor to dynamically register my views with the DI container, and it isn't picking up the public partial classes for some reason. I find @kekekeks' advice to use a view locator instead of DI for the views interesting, and I may end up going that route instead.
I am not able to run the code with the examples provided but three things immediately stick out for me that might be the cause of the long runtime.
1: LazyFrame:
Lazy evaluation is great in many circumstances where operations are reasonably linear. In this case, there are some linear and some dependent calculations. This may be creating a large backlog of lazy queries, especially inside the for loop.
2: For Loop:
If the same actions are being taken for each combination of shift and code, it would probably be much faster to vectorize these nested loops. Without knowing what data you are passing into this function, it is hard for me to know how exactly to write a vectorized version that gets rid of the for loops.
3: DataFrame vs Array:
This is more of a general remark, but it's often the case that NumPy arrays are significantly more performant for math operations than DataFrames. If the long runtime of the module is a concern, I would recommend bouncing that DataFrame into an array when it enters the function, doing all the required math on it, then converting it back to a DataFrame on the way out. This back-and-forth conversion is normally much faster when there is a larger volume of math that needs to happen in between.
You are using a CellStyle, but merged cells are a collection of cells. Try using the RegionUtil class for your merged cells:
https://poi.apache.org/apidocs/5.0/org/apache/poi/ss/util/RegionUtil.html
To whom it may concern,
Although it is not specified in the documentation, if the emulator is running in Docker you need to set
- AZURE_COSMOS_EMULATOR_IP_ADDRESS_OVERRIDE=127.0.0.1
The answer was actually here: https://stackoverflow.com/a/75757385
If someone needs it, this is a basic docker-compose.yaml file that works:
services:
cosmosdb-emulator:
image: mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator:latest
container_name: cosmos-db-emulator
ports:
- "8081:8081"
- "10250-10255:10250-10255"
environment:
- AZURE_COSMOS_EMULATOR_IP_ADDRESS_OVERRIDE=127.0.0.1
- AZURE_COSMOS_EMULATOR_PARTITION_COUNT=3
- AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=true
restart: unless-stopped
Just to point out a few more things.
The partition count is reduced to 3, which means it will create 4 partitions :) (look at the running emulator log).
Also, the emulator is quite slow to start up, so be patient.
If you don't want to import the certificate, then you can do this:
CosmosClientOptions options = new()
{
HttpClientFactory = () => new HttpClient(new HttpClientHandler()
{ ServerCertificateCustomValidationCallback = HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
}),
ConnectionMode = ConnectionMode.Gateway,
};
using CosmosClient client = new(
    accountEndpoint: "https://localhost:8081/",
    authKeyOrResourceToken: "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==",
    clientOptions: options
);
Otherwise import certificate: https://learn.microsoft.com/en-us/azure/cosmos-db/how-to-develop-emulator?tabs=docker-linux%2Ccsharp&pivots=api-nosql
I am having the same issue, even though all the values required by the connection object are being provided. I even tried the connection string method but got the same results. Before, I was getting "username is not a string", but now it appears the issue is on the DB side.
I found that your project needs to be using package references instead of the old packages.config file. You can right-click packages.config in Solution Explorer, then select "Migrate packages.config to PackageReference..." in the context menu. In the popup dialog, it actually lists the transitive dependencies before doing the migration.
Changing the Row Source when the ComboBox GotFocus seems to have worked. Not as difficult as I feared. When the box GotFocus, change the row source to limit the list to active employees. When the box LostFocus, set the row source back.
I have seen this in development when multiple clients connect to the hub with the same credentials. The first one gets the command; the second connects, ending the first one's session, and it does not know what is going on.
Got it solved:
The divider between the files pane and the watches pane was right at the window's edge (on the left side), so I grabbed it and pulled it back to a regular size.
Simple XLOOKUP and TEXTJOIN wrapped in the MAP function to make it dynamic:
=MAP(A3:A4, LAMBDA(m, TEXTJOIN(CHAR(10),,XLOOKUP(TEXTSPLIT(m," "), A9:A12, B9:B12))))
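To make the formula's behavior concrete, here is a Python sketch of the same logic (the lookup table and keys are hypothetical stand-ins for the A9:B12 range):

```python
# Hypothetical stand-in for the lookup range A9:B12.
lookup = {"a1": "Alpha", "b2": "Beta", "c3": "Gamma"}

def map_cell(value):
    # TEXTSPLIT(m, " "): split the cell on spaces;
    # XLOOKUP: translate each token via the lookup range;
    # TEXTJOIN(CHAR(10), ...): join the results with newlines.
    return "\n".join(lookup[token] for token in value.split(" "))

print(map_cell("a1 c3"))  # prints Alpha and Gamma on separate lines
```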